00:00:00.000 Started by upstream project "autotest-nightly" build number 4355
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3718
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.075 The recommended git tool is: git
00:00:00.075 using credential 00000000-0000-0000-0000-000000000002
00:00:00.078 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.107 Fetching changes from the remote Git repository
00:00:00.112 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.149 Using shallow fetch with depth 1
00:00:00.149 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.149 > git --version # timeout=10
00:00:00.192 > git --version # 'git version 2.39.2'
00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.229 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.239 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.251 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.251 > git config core.sparsecheckout # timeout=10
00:00:04.262 > git read-tree -mu HEAD # timeout=10
00:00:04.277 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.292 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.292 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.398 [Pipeline] Start of Pipeline
00:00:04.413 [Pipeline] library
00:00:04.414 Loading library shm_lib@master
00:00:04.414 Library shm_lib@master is cached. Copying from home.
00:00:04.427 [Pipeline] node
00:00:04.440 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.441 [Pipeline] {
00:00:04.452 [Pipeline] catchError
00:00:04.454 [Pipeline] {
00:00:04.466 [Pipeline] wrap
00:00:04.476 [Pipeline] {
00:00:04.484 [Pipeline] stage
00:00:04.485 [Pipeline] { (Prologue)
00:00:04.503 [Pipeline] echo
00:00:04.504 Node: VM-host-WFP7
00:00:04.510 [Pipeline] cleanWs
00:00:04.521 [WS-CLEANUP] Deleting project workspace...
00:00:04.521 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.527 [WS-CLEANUP] done
00:00:04.713 [Pipeline] setCustomBuildProperty
00:00:04.800 [Pipeline] httpRequest
00:00:05.146 [Pipeline] echo
00:00:05.147 Sorcerer 10.211.164.20 is alive
00:00:05.154 [Pipeline] retry
00:00:05.155 [Pipeline] {
00:00:05.165 [Pipeline] httpRequest
00:00:05.169 HttpMethod: GET
00:00:05.170 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.170 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.172 Response Code: HTTP/1.1 200 OK
00:00:05.172 Success: Status code 200 is in the accepted range: 200,404
00:00:05.172 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.606 [Pipeline] }
00:00:05.619 [Pipeline] // retry
00:00:05.625 [Pipeline] sh
00:00:05.911 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.925 [Pipeline] httpRequest
00:00:06.482 [Pipeline] echo
00:00:06.484 Sorcerer 10.211.164.20 is alive
00:00:06.493 [Pipeline] retry
00:00:06.494 [Pipeline] {
00:00:06.508 [Pipeline] httpRequest
00:00:06.513 HttpMethod: GET
00:00:06.514 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:06.514 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:06.516 Response Code: HTTP/1.1 200 OK
00:00:06.516 Success: Status code 200 is in the accepted range: 200,404
00:00:06.517 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:29.022 [Pipeline] }
00:00:29.041 [Pipeline] // retry
00:00:29.049 [Pipeline] sh
00:00:29.335 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:31.889 [Pipeline] sh
00:00:32.173 + git -C spdk log --oneline -n5
00:00:32.173 e01cb43b8 mk/spdk.common.mk sed the minor version
00:00:32.173 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:32.173 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:32.173 66289a6db build: use VERSION file for storing version
00:00:32.173 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:32.192 [Pipeline] writeFile
00:00:32.208 [Pipeline] sh
00:00:32.494 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:32.506 [Pipeline] sh
00:00:32.788 + cat autorun-spdk.conf
00:00:32.788 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.788 SPDK_RUN_ASAN=1
00:00:32.788 SPDK_RUN_UBSAN=1
00:00:32.788 SPDK_TEST_RAID=1
00:00:32.788 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:32.796 RUN_NIGHTLY=1
00:00:32.798 [Pipeline] }
00:00:32.811 [Pipeline] // stage
00:00:32.825 [Pipeline] stage
00:00:32.827 [Pipeline] { (Run VM)
00:00:32.840 [Pipeline] sh
00:00:33.125 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:33.125 + echo 'Start stage prepare_nvme.sh'
00:00:33.125 Start stage prepare_nvme.sh
00:00:33.125 + [[ -n 2 ]]
00:00:33.125 + disk_prefix=ex2
00:00:33.125 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:33.125 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:33.125 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:33.125 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.125 ++ SPDK_RUN_ASAN=1
00:00:33.125 ++ SPDK_RUN_UBSAN=1
00:00:33.125 ++ SPDK_TEST_RAID=1
00:00:33.125 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:33.125 ++ RUN_NIGHTLY=1
00:00:33.125 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:33.125 + nvme_files=()
00:00:33.125 + declare -A nvme_files
00:00:33.125 + backend_dir=/var/lib/libvirt/images/backends
00:00:33.125 + nvme_files['nvme.img']=5G
00:00:33.125 + nvme_files['nvme-cmb.img']=5G
00:00:33.125 + nvme_files['nvme-multi0.img']=4G
00:00:33.125 + nvme_files['nvme-multi1.img']=4G
00:00:33.125 + nvme_files['nvme-multi2.img']=4G
00:00:33.125 + nvme_files['nvme-openstack.img']=8G
00:00:33.125 + nvme_files['nvme-zns.img']=5G
00:00:33.125 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:33.125 + (( SPDK_TEST_FTL == 1 ))
00:00:33.125 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:33.125 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:33.125 + for nvme in "${!nvme_files[@]}"
00:00:33.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:33.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.125 + for nvme in "${!nvme_files[@]}"
00:00:33.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:33.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:33.125 + for nvme in "${!nvme_files[@]}"
00:00:33.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:33.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:33.125 + for nvme in "${!nvme_files[@]}"
00:00:33.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:33.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:33.125 + for nvme in "${!nvme_files[@]}"
00:00:33.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:33.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.125 + for nvme in "${!nvme_files[@]}"
00:00:33.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:33.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.125 + for nvme in "${!nvme_files[@]}"
00:00:33.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:33.385 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:33.385 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:33.385 + echo 'End stage prepare_nvme.sh'
00:00:33.385 End stage prepare_nvme.sh
00:00:33.398 [Pipeline] sh
00:00:33.684 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:33.684 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:00:33.684
00:00:33.684 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:33.684 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:33.684 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:33.684 HELP=0
00:00:33.684 DRY_RUN=0
00:00:33.684 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:00:33.684 NVME_DISKS_TYPE=nvme,nvme,
00:00:33.684 NVME_AUTO_CREATE=0
00:00:33.684 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:00:33.684 NVME_CMB=,,
00:00:33.684 NVME_PMR=,,
00:00:33.684 NVME_ZNS=,,
00:00:33.684 NVME_MS=,,
00:00:33.684 NVME_FDP=,,
00:00:33.684 SPDK_VAGRANT_DISTRO=fedora39
00:00:33.684 SPDK_VAGRANT_VMCPU=10
00:00:33.684 SPDK_VAGRANT_VMRAM=12288
00:00:33.684 SPDK_VAGRANT_PROVIDER=libvirt
00:00:33.684 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:33.684 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:33.684 SPDK_OPENSTACK_NETWORK=0
00:00:33.684 VAGRANT_PACKAGE_BOX=0
00:00:33.684 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:33.684 FORCE_DISTRO=true
00:00:33.684 VAGRANT_BOX_VERSION=
00:00:33.684 EXTRA_VAGRANTFILES=
00:00:33.684 NIC_MODEL=virtio
00:00:33.684
00:00:33.684 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:33.684 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:35.594 Bringing machine 'default' up with 'libvirt' provider...
00:00:36.165 ==> default: Creating image (snapshot of base box volume).
00:00:36.165 ==> default: Creating domain with the following settings...
00:00:36.165 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734031758_6a5f33251c917be67df2
00:00:36.165 ==> default: -- Domain type: kvm
00:00:36.165 ==> default: -- Cpus: 10
00:00:36.165 ==> default: -- Feature: acpi
00:00:36.165 ==> default: -- Feature: apic
00:00:36.165 ==> default: -- Feature: pae
00:00:36.165 ==> default: -- Memory: 12288M
00:00:36.165 ==> default: -- Memory Backing: hugepages:
00:00:36.165 ==> default: -- Management MAC:
00:00:36.165 ==> default: -- Loader:
00:00:36.165 ==> default: -- Nvram:
00:00:36.165 ==> default: -- Base box: spdk/fedora39
00:00:36.165 ==> default: -- Storage pool: default
00:00:36.165 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734031758_6a5f33251c917be67df2.img (20G)
00:00:36.165 ==> default: -- Volume Cache: default
00:00:36.165 ==> default: -- Kernel:
00:00:36.165 ==> default: -- Initrd:
00:00:36.165 ==> default: -- Graphics Type: vnc
00:00:36.165 ==> default: -- Graphics Port: -1
00:00:36.165 ==> default: -- Graphics IP: 127.0.0.1
00:00:36.165 ==> default: -- Graphics Password: Not defined
00:00:36.165 ==> default: -- Video Type: cirrus
00:00:36.165 ==> default: -- Video VRAM: 9216
00:00:36.165 ==> default: -- Sound Type:
00:00:36.165 ==> default: -- Keymap: en-us
00:00:36.165 ==> default: -- TPM Path:
00:00:36.165 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:36.165 ==> default: -- Command line args:
00:00:36.165 ==> default: -> value=-device,
00:00:36.165 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:36.165 ==> default: -> value=-drive,
00:00:36.165 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:00:36.165 ==> default: -> value=-device,
00:00:36.165 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:36.165 ==> default: -> value=-device,
00:00:36.165 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:36.165 ==> default: -> value=-drive,
00:00:36.165 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:36.165 ==> default: -> value=-device,
00:00:36.165 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:36.165 ==> default: -> value=-drive,
00:00:36.165 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:36.165 ==> default: -> value=-device,
00:00:36.165 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:36.165 ==> default: -> value=-drive,
00:00:36.165 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:36.165 ==> default: -> value=-device,
00:00:36.165 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:36.427 ==> default: Creating shared folders metadata...
00:00:36.427 ==> default: Starting domain.
00:00:38.338 ==> default: Waiting for domain to get an IP address...
00:00:53.235 ==> default: Waiting for SSH to become available...
00:00:54.616 ==> default: Configuring and enabling network interfaces...
00:01:01.200 default: SSH address: 192.168.121.51:22
00:01:01.200 default: SSH username: vagrant
00:01:01.200 default: SSH auth method: private key
00:01:03.744 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:11.880 ==> default: Mounting SSHFS shared folder...
00:01:14.425 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:14.425 ==> default: Checking Mount..
00:01:16.336 ==> default: Folder Successfully Mounted!
00:01:16.336 ==> default: Running provisioner: file...
00:01:17.277 default: ~/.gitconfig => .gitconfig
00:01:17.847
00:01:17.847 SUCCESS!
00:01:17.847
00:01:17.847 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:17.847 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:17.847 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:17.847
00:01:17.856 [Pipeline] }
00:01:17.870 [Pipeline] // stage
00:01:17.878 [Pipeline] dir
00:01:17.878 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:17.880 [Pipeline] {
00:01:17.890 [Pipeline] catchError
00:01:17.892 [Pipeline] {
00:01:17.903 [Pipeline] sh
00:01:18.184 + vagrant ssh-config --host vagrant
00:01:18.184 + sed -ne /^Host/,$p
00:01:18.184 + tee ssh_conf
00:01:20.717 Host vagrant
00:01:20.717   HostName 192.168.121.51
00:01:20.717   User vagrant
00:01:20.717   Port 22
00:01:20.717   UserKnownHostsFile /dev/null
00:01:20.717   StrictHostKeyChecking no
00:01:20.717   PasswordAuthentication no
00:01:20.717   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:20.717   IdentitiesOnly yes
00:01:20.717   LogLevel FATAL
00:01:20.717   ForwardAgent yes
00:01:20.717   ForwardX11 yes
00:01:20.717
00:01:20.730 [Pipeline] withEnv
00:01:20.731 [Pipeline] {
00:01:20.745 [Pipeline] sh
00:01:21.026 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:21.026 source /etc/os-release
00:01:21.026 [[ -e /image.version ]] && img=$(< /image.version)
00:01:21.026 # Minimal, systemd-like check.
00:01:21.026 if [[ -e /.dockerenv ]]; then
00:01:21.026 # Clear garbage from the node's name:
00:01:21.026 # agt-er_autotest_547-896 -> autotest_547-896
00:01:21.026 # $HOSTNAME is the actual container id
00:01:21.026 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:21.026 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:21.026 # We can assume this is a mount from a host where container is running,
00:01:21.026 # so fetch its hostname to easily identify the target swarm worker.
00:01:21.026 container="$(< /etc/hostname) ($agent)"
00:01:21.026 else
00:01:21.026 # Fallback
00:01:21.026 container=$agent
00:01:21.026 fi
00:01:21.026 fi
00:01:21.026 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:21.026
00:01:21.298 [Pipeline] }
00:01:21.313 [Pipeline] // withEnv
00:01:21.320 [Pipeline] setCustomBuildProperty
00:01:21.332 [Pipeline] stage
00:01:21.334 [Pipeline] { (Tests)
00:01:21.346 [Pipeline] sh
00:01:21.627 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:21.902 [Pipeline] sh
00:01:22.188 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:22.480 [Pipeline] timeout
00:01:22.480 Timeout set to expire in 1 hr 30 min
00:01:22.483 [Pipeline] {
00:01:22.495 [Pipeline] sh
00:01:22.774 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:23.364 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:23.439 [Pipeline] sh
00:01:23.727 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:24.001 [Pipeline] sh
00:01:24.285 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:24.560 [Pipeline] sh
00:01:24.842 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:25.102 ++ readlink -f spdk_repo
00:01:25.102 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:25.102 + [[ -n /home/vagrant/spdk_repo ]]
00:01:25.102 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:25.102 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:25.102 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:25.102 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:25.102 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:25.102 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:25.102 + cd /home/vagrant/spdk_repo
00:01:25.102 + source /etc/os-release
00:01:25.102 ++ NAME='Fedora Linux'
00:01:25.102 ++ VERSION='39 (Cloud Edition)'
00:01:25.102 ++ ID=fedora
00:01:25.102 ++ VERSION_ID=39
00:01:25.102 ++ VERSION_CODENAME=
00:01:25.102 ++ PLATFORM_ID=platform:f39
00:01:25.102 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:25.102 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:25.102 ++ LOGO=fedora-logo-icon
00:01:25.102 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:25.102 ++ HOME_URL=https://fedoraproject.org/
00:01:25.102 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:25.102 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:25.102 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:25.102 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:25.102 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:25.102 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:25.102 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:25.102 ++ SUPPORT_END=2024-11-12
00:01:25.102 ++ VARIANT='Cloud Edition'
00:01:25.102 ++ VARIANT_ID=cloud
00:01:25.102 + uname -a
00:01:25.102 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:25.102 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:25.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:25.672 Hugepages
00:01:25.672 node     hugesize     free /  total
00:01:25.672 node0   1048576kB        0 /      0
00:01:25.672 node0      2048kB        0 /      0
00:01:25.672
00:01:25.672 Type   BDF            Vendor Device NUMA    Driver     Device Block devices
00:01:25.672 virtio 0000:00:03.0   1af4   1001   unknown virtio-pci -      vda
00:01:25.672 NVMe   0000:00:10.0   1b36   0010   unknown nvme       nvme0  nvme0n1
00:01:25.932 NVMe   0000:00:11.0   1b36   0010   unknown nvme       nvme1  nvme1n1 nvme1n2 nvme1n3
00:01:25.932 + rm -f /tmp/spdk-ld-path
00:01:25.932 + source autorun-spdk.conf
00:01:25.932 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.932 ++ SPDK_RUN_ASAN=1
00:01:25.932 ++ SPDK_RUN_UBSAN=1
00:01:25.932 ++ SPDK_TEST_RAID=1
00:01:25.932 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.932 ++ RUN_NIGHTLY=1
00:01:25.932 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:25.932 + [[ -n '' ]]
00:01:25.932 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:25.932 + for M in /var/spdk/build-*-manifest.txt
00:01:25.932 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:25.932 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:25.932 + for M in /var/spdk/build-*-manifest.txt
00:01:25.932 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:25.932 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:25.932 + for M in /var/spdk/build-*-manifest.txt
00:01:25.932 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:25.932 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:25.932 ++ uname
00:01:25.932 + [[ Linux == \L\i\n\u\x ]]
00:01:25.932 + sudo dmesg -T
00:01:25.932 + sudo dmesg --clear
00:01:25.932 + dmesg_pid=5429
00:01:25.932 + sudo dmesg -Tw
00:01:25.932 + [[ Fedora Linux == FreeBSD ]]
00:01:25.932 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.932 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.932 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:25.932 + [[ -x /usr/src/fio-static/fio ]]
00:01:25.932 + export FIO_BIN=/usr/src/fio-static/fio
00:01:25.932 + FIO_BIN=/usr/src/fio-static/fio
00:01:25.932 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:25.932 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:25.932 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:25.932 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.932 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.932 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:25.932 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.932 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.932 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:26.192 19:30:08 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:26.192 19:30:08 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:26.192 19:30:08 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.192 19:30:08 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:26.192 19:30:08 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:26.192 19:30:08 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:26.192 19:30:08 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:26.192 19:30:08 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1
00:01:26.192 19:30:08 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:26.192 19:30:08 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:26.192 19:30:08 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:26.192 19:30:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:26.192 19:30:08 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:26.192 19:30:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:26.192 19:30:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:26.192 19:30:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:26.192 19:30:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.193 19:30:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.193 19:30:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.193 19:30:08 -- paths/export.sh@5 -- $ export PATH
00:01:26.193 19:30:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.193 19:30:08 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:26.193 19:30:08 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:26.193 19:30:08 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734031808.XXXXXX
00:01:26.193 19:30:08 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734031808.AU0fxT
00:01:26.193 19:30:08 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:26.193 19:30:08 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:26.193 19:30:08 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:26.193 19:30:08 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:26.193 19:30:08 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:26.193 19:30:08 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:26.193 19:30:08 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:26.193 19:30:08 -- common/autotest_common.sh@10 -- $ set +x
00:01:26.193 19:30:08 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:26.193 19:30:08 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:26.193 19:30:08 -- pm/common@17 -- $ local monitor
00:01:26.193 19:30:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:26.193 19:30:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:26.193 19:30:08 -- pm/common@25 -- $ sleep 1
00:01:26.193 19:30:08 -- pm/common@21 -- $ date +%s
00:01:26.193 19:30:08 -- pm/common@21 -- $ date +%s
00:01:26.193 19:30:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734031808
00:01:26.193 19:30:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734031808
00:01:26.193 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734031808_collect-cpu-load.pm.log
00:01:26.193 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734031808_collect-vmstat.pm.log
00:01:27.131 19:30:09 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:27.131 19:30:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:27.131 19:30:09 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:27.131 19:30:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:27.131 19:30:09 -- spdk/autobuild.sh@16 -- $ date -u
00:01:27.131 Thu Dec 12 07:30:09 PM UTC 2024
00:01:27.131 19:30:09 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:27.131 v25.01-rc1-2-ge01cb43b8
00:01:27.131 19:30:09 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:27.131 19:30:09 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:27.131 19:30:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:27.131 19:30:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:27.131 19:30:09 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.391 ************************************
00:01:27.391 START TEST asan
00:01:27.391 ************************************
00:01:27.391 using asan
00:01:27.391 19:30:09 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:27.391
00:01:27.391 real	0m0.001s
00:01:27.391 user	0m0.000s
00:01:27.391 sys	0m0.000s
00:01:27.391 19:30:09 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:27.391 19:30:09 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:27.391 ************************************
00:01:27.391 END TEST asan
00:01:27.391 ************************************
00:01:27.391 19:30:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:27.391 19:30:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:27.391 19:30:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:27.391 19:30:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:27.391 19:30:10 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.391 ************************************
00:01:27.391 START TEST ubsan
00:01:27.391 ************************************
00:01:27.391 using ubsan
00:01:27.391 19:30:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:27.391
00:01:27.391 real	0m0.000s
00:01:27.391 user	0m0.000s
00:01:27.391 sys	0m0.000s
00:01:27.391 19:30:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:27.391 19:30:10 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:27.391 ************************************
00:01:27.391 END TEST ubsan
00:01:27.391 ************************************
00:01:27.391 19:30:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:27.391 19:30:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:27.391 19:30:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:27.391 19:30:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:27.391 19:30:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:27.391 19:30:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:27.391 19:30:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:27.391 19:30:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:27.391 19:30:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:27.651 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:27.651 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:28.219 Using 'verbs' RDMA provider
00:01:44.043 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:58.943 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:59.514 Creating mk/config.mk...done.
00:01:59.514 Creating mk/cc.flags.mk...done.
00:01:59.514 Type 'make' to build.
00:01:59.514 19:30:42 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:59.514 19:30:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:59.514 19:30:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:59.514 19:30:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:59.514 ************************************
00:01:59.514 START TEST make
00:01:59.514 ************************************
00:01:59.514 19:30:42 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:11.726 The Meson build system
00:02:11.726 Version: 1.5.0
00:02:11.726 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:11.726 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:11.726 Build type: native build
00:02:11.726 Program cat found: YES (/usr/bin/cat)
00:02:11.726 Project name: DPDK
00:02:11.726 Project version: 24.03.0
00:02:11.726 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:11.726 C linker for the host machine: cc ld.bfd 2.40-14
00:02:11.726 Host machine cpu family: x86_64
00:02:11.726 Host machine cpu: x86_64
00:02:11.726 Message: ## Building in Developer Mode ##
00:02:11.726 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:11.726 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:11.726 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:11.726 Program python3 found: YES (/usr/bin/python3)
00:02:11.726 Program cat found: YES (/usr/bin/cat)
00:02:11.726 Compiler for C supports arguments -march=native: YES
00:02:11.726 Checking for size of "void *" : 8
00:02:11.726 Checking for size of "void *" : 8 (cached)
00:02:11.726 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:11.726 Library m found: YES
00:02:11.726 Library numa found: YES
00:02:11.726 Has header "numaif.h" : YES
00:02:11.726 Library fdt found: NO
00:02:11.726 Library execinfo found: NO
00:02:11.726 Has header "execinfo.h" : YES
00:02:11.726 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:11.726 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:11.726 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:11.726 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:11.726 Run-time dependency openssl found: YES 3.1.1
00:02:11.726 Run-time dependency libpcap found: YES 1.10.4
00:02:11.727 Has header "pcap.h" with dependency libpcap: YES
00:02:11.727 Compiler for C supports arguments -Wcast-qual: YES
00:02:11.727 Compiler for C supports arguments -Wdeprecated: YES
00:02:11.727 Compiler for C supports arguments -Wformat: YES
00:02:11.727 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:11.727 Compiler for C supports arguments -Wformat-security: NO
00:02:11.727 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:11.727 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:11.727 Compiler for C supports arguments -Wnested-externs: YES
00:02:11.727 Compiler for C supports arguments -Wold-style-definition: YES
00:02:11.727 Compiler for C supports arguments -Wpointer-arith: YES
00:02:11.727 Compiler for C supports arguments -Wsign-compare: YES
00:02:11.727 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:11.727 Compiler for C supports arguments -Wundef: YES
00:02:11.727 Compiler for C supports arguments -Wwrite-strings: YES
00:02:11.727 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:11.727 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:11.727 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.727 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:11.727 Program objdump found: YES (/usr/bin/objdump) 00:02:11.727 Compiler for C supports arguments -mavx512f: YES 00:02:11.727 Checking if "AVX512 checking" compiles: YES 00:02:11.727 Fetching value of define "__SSE4_2__" : 1 00:02:11.727 Fetching value of define "__AES__" : 1 00:02:11.727 Fetching value of define "__AVX__" : 1 00:02:11.727 Fetching value of define "__AVX2__" : 1 00:02:11.727 Fetching value of define "__AVX512BW__" : 1 00:02:11.727 Fetching value of define "__AVX512CD__" : 1 00:02:11.727 Fetching value of define "__AVX512DQ__" : 1 00:02:11.727 Fetching value of define "__AVX512F__" : 1 00:02:11.727 Fetching value of define "__AVX512VL__" : 1 00:02:11.727 Fetching value of define "__PCLMUL__" : 1 00:02:11.727 Fetching value of define "__RDRND__" : 1 00:02:11.727 Fetching value of define "__RDSEED__" : 1 00:02:11.727 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:11.727 Fetching value of define "__znver1__" : (undefined) 00:02:11.727 Fetching value of define "__znver2__" : (undefined) 00:02:11.727 Fetching value of define "__znver3__" : (undefined) 00:02:11.727 Fetching value of define "__znver4__" : (undefined) 00:02:11.727 Library asan found: YES 00:02:11.727 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:11.727 Message: lib/log: Defining dependency "log" 00:02:11.727 Message: lib/kvargs: Defining dependency "kvargs" 00:02:11.727 Message: lib/telemetry: Defining dependency "telemetry" 00:02:11.727 Library rt found: YES 00:02:11.727 Checking for function "getentropy" : NO 00:02:11.727 Message: lib/eal: Defining dependency "eal" 00:02:11.727 Message: lib/ring: Defining dependency "ring" 00:02:11.727 Message: lib/rcu: Defining 
dependency "rcu" 00:02:11.727 Message: lib/mempool: Defining dependency "mempool" 00:02:11.727 Message: lib/mbuf: Defining dependency "mbuf" 00:02:11.727 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:11.727 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.727 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.727 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.727 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.727 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:11.727 Compiler for C supports arguments -mpclmul: YES 00:02:11.727 Compiler for C supports arguments -maes: YES 00:02:11.727 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.727 Compiler for C supports arguments -mavx512bw: YES 00:02:11.727 Compiler for C supports arguments -mavx512dq: YES 00:02:11.727 Compiler for C supports arguments -mavx512vl: YES 00:02:11.727 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:11.727 Compiler for C supports arguments -mavx2: YES 00:02:11.727 Compiler for C supports arguments -mavx: YES 00:02:11.727 Message: lib/net: Defining dependency "net" 00:02:11.727 Message: lib/meter: Defining dependency "meter" 00:02:11.727 Message: lib/ethdev: Defining dependency "ethdev" 00:02:11.727 Message: lib/pci: Defining dependency "pci" 00:02:11.727 Message: lib/cmdline: Defining dependency "cmdline" 00:02:11.727 Message: lib/hash: Defining dependency "hash" 00:02:11.727 Message: lib/timer: Defining dependency "timer" 00:02:11.727 Message: lib/compressdev: Defining dependency "compressdev" 00:02:11.727 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:11.727 Message: lib/dmadev: Defining dependency "dmadev" 00:02:11.727 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:11.727 Message: lib/power: Defining dependency "power" 00:02:11.727 Message: lib/reorder: Defining dependency "reorder" 00:02:11.727 Message: lib/security: Defining dependency "security" 
00:02:11.727 Has header "linux/userfaultfd.h" : YES 00:02:11.727 Has header "linux/vduse.h" : YES 00:02:11.727 Message: lib/vhost: Defining dependency "vhost" 00:02:11.727 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:11.727 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:11.727 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:11.727 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:11.727 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:11.727 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:11.727 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:11.727 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:11.727 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:11.727 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:11.727 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:11.727 Configuring doxy-api-html.conf using configuration 00:02:11.727 Configuring doxy-api-man.conf using configuration 00:02:11.727 Program mandb found: YES (/usr/bin/mandb) 00:02:11.727 Program sphinx-build found: NO 00:02:11.727 Configuring rte_build_config.h using configuration 00:02:11.727 Message: 00:02:11.727 ================= 00:02:11.727 Applications Enabled 00:02:11.727 ================= 00:02:11.727 00:02:11.727 apps: 00:02:11.727 00:02:11.727 00:02:11.727 Message: 00:02:11.727 ================= 00:02:11.727 Libraries Enabled 00:02:11.727 ================= 00:02:11.727 00:02:11.727 libs: 00:02:11.727 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:11.727 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:11.727 cryptodev, dmadev, power, reorder, security, vhost, 00:02:11.727 00:02:11.727 Message: 00:02:11.727 =============== 00:02:11.727 Drivers Enabled 00:02:11.727 =============== 00:02:11.727 
00:02:11.727 common: 00:02:11.727 00:02:11.727 bus: 00:02:11.727 pci, vdev, 00:02:11.727 mempool: 00:02:11.727 ring, 00:02:11.727 dma: 00:02:11.727 00:02:11.727 net: 00:02:11.727 00:02:11.727 crypto: 00:02:11.727 00:02:11.727 compress: 00:02:11.727 00:02:11.727 vdpa: 00:02:11.727 00:02:11.727 00:02:11.727 Message: 00:02:11.727 ================= 00:02:11.727 Content Skipped 00:02:11.727 ================= 00:02:11.727 00:02:11.727 apps: 00:02:11.727 dumpcap: explicitly disabled via build config 00:02:11.727 graph: explicitly disabled via build config 00:02:11.727 pdump: explicitly disabled via build config 00:02:11.727 proc-info: explicitly disabled via build config 00:02:11.727 test-acl: explicitly disabled via build config 00:02:11.727 test-bbdev: explicitly disabled via build config 00:02:11.727 test-cmdline: explicitly disabled via build config 00:02:11.727 test-compress-perf: explicitly disabled via build config 00:02:11.727 test-crypto-perf: explicitly disabled via build config 00:02:11.727 test-dma-perf: explicitly disabled via build config 00:02:11.727 test-eventdev: explicitly disabled via build config 00:02:11.727 test-fib: explicitly disabled via build config 00:02:11.727 test-flow-perf: explicitly disabled via build config 00:02:11.727 test-gpudev: explicitly disabled via build config 00:02:11.727 test-mldev: explicitly disabled via build config 00:02:11.727 test-pipeline: explicitly disabled via build config 00:02:11.727 test-pmd: explicitly disabled via build config 00:02:11.727 test-regex: explicitly disabled via build config 00:02:11.727 test-sad: explicitly disabled via build config 00:02:11.727 test-security-perf: explicitly disabled via build config 00:02:11.727 00:02:11.727 libs: 00:02:11.727 argparse: explicitly disabled via build config 00:02:11.727 metrics: explicitly disabled via build config 00:02:11.727 acl: explicitly disabled via build config 00:02:11.727 bbdev: explicitly disabled via build config 00:02:11.727 bitratestats: explicitly 
disabled via build config 00:02:11.727 bpf: explicitly disabled via build config 00:02:11.727 cfgfile: explicitly disabled via build config 00:02:11.727 distributor: explicitly disabled via build config 00:02:11.727 efd: explicitly disabled via build config 00:02:11.727 eventdev: explicitly disabled via build config 00:02:11.727 dispatcher: explicitly disabled via build config 00:02:11.727 gpudev: explicitly disabled via build config 00:02:11.727 gro: explicitly disabled via build config 00:02:11.727 gso: explicitly disabled via build config 00:02:11.727 ip_frag: explicitly disabled via build config 00:02:11.728 jobstats: explicitly disabled via build config 00:02:11.728 latencystats: explicitly disabled via build config 00:02:11.728 lpm: explicitly disabled via build config 00:02:11.728 member: explicitly disabled via build config 00:02:11.728 pcapng: explicitly disabled via build config 00:02:11.728 rawdev: explicitly disabled via build config 00:02:11.728 regexdev: explicitly disabled via build config 00:02:11.728 mldev: explicitly disabled via build config 00:02:11.728 rib: explicitly disabled via build config 00:02:11.728 sched: explicitly disabled via build config 00:02:11.728 stack: explicitly disabled via build config 00:02:11.728 ipsec: explicitly disabled via build config 00:02:11.728 pdcp: explicitly disabled via build config 00:02:11.728 fib: explicitly disabled via build config 00:02:11.728 port: explicitly disabled via build config 00:02:11.728 pdump: explicitly disabled via build config 00:02:11.728 table: explicitly disabled via build config 00:02:11.728 pipeline: explicitly disabled via build config 00:02:11.728 graph: explicitly disabled via build config 00:02:11.728 node: explicitly disabled via build config 00:02:11.728 00:02:11.728 drivers: 00:02:11.728 common/cpt: not in enabled drivers build config 00:02:11.728 common/dpaax: not in enabled drivers build config 00:02:11.728 common/iavf: not in enabled drivers build config 00:02:11.728 
common/idpf: not in enabled drivers build config 00:02:11.728 common/ionic: not in enabled drivers build config 00:02:11.728 common/mvep: not in enabled drivers build config 00:02:11.728 common/octeontx: not in enabled drivers build config 00:02:11.728 bus/auxiliary: not in enabled drivers build config 00:02:11.728 bus/cdx: not in enabled drivers build config 00:02:11.728 bus/dpaa: not in enabled drivers build config 00:02:11.728 bus/fslmc: not in enabled drivers build config 00:02:11.728 bus/ifpga: not in enabled drivers build config 00:02:11.728 bus/platform: not in enabled drivers build config 00:02:11.728 bus/uacce: not in enabled drivers build config 00:02:11.728 bus/vmbus: not in enabled drivers build config 00:02:11.728 common/cnxk: not in enabled drivers build config 00:02:11.728 common/mlx5: not in enabled drivers build config 00:02:11.728 common/nfp: not in enabled drivers build config 00:02:11.728 common/nitrox: not in enabled drivers build config 00:02:11.728 common/qat: not in enabled drivers build config 00:02:11.728 common/sfc_efx: not in enabled drivers build config 00:02:11.728 mempool/bucket: not in enabled drivers build config 00:02:11.728 mempool/cnxk: not in enabled drivers build config 00:02:11.728 mempool/dpaa: not in enabled drivers build config 00:02:11.728 mempool/dpaa2: not in enabled drivers build config 00:02:11.728 mempool/octeontx: not in enabled drivers build config 00:02:11.728 mempool/stack: not in enabled drivers build config 00:02:11.728 dma/cnxk: not in enabled drivers build config 00:02:11.728 dma/dpaa: not in enabled drivers build config 00:02:11.728 dma/dpaa2: not in enabled drivers build config 00:02:11.728 dma/hisilicon: not in enabled drivers build config 00:02:11.728 dma/idxd: not in enabled drivers build config 00:02:11.728 dma/ioat: not in enabled drivers build config 00:02:11.728 dma/skeleton: not in enabled drivers build config 00:02:11.728 net/af_packet: not in enabled drivers build config 00:02:11.728 net/af_xdp: 
not in enabled drivers build config 00:02:11.728 net/ark: not in enabled drivers build config 00:02:11.728 net/atlantic: not in enabled drivers build config 00:02:11.728 net/avp: not in enabled drivers build config 00:02:11.728 net/axgbe: not in enabled drivers build config 00:02:11.728 net/bnx2x: not in enabled drivers build config 00:02:11.728 net/bnxt: not in enabled drivers build config 00:02:11.728 net/bonding: not in enabled drivers build config 00:02:11.728 net/cnxk: not in enabled drivers build config 00:02:11.728 net/cpfl: not in enabled drivers build config 00:02:11.728 net/cxgbe: not in enabled drivers build config 00:02:11.728 net/dpaa: not in enabled drivers build config 00:02:11.728 net/dpaa2: not in enabled drivers build config 00:02:11.728 net/e1000: not in enabled drivers build config 00:02:11.728 net/ena: not in enabled drivers build config 00:02:11.728 net/enetc: not in enabled drivers build config 00:02:11.728 net/enetfec: not in enabled drivers build config 00:02:11.728 net/enic: not in enabled drivers build config 00:02:11.728 net/failsafe: not in enabled drivers build config 00:02:11.728 net/fm10k: not in enabled drivers build config 00:02:11.728 net/gve: not in enabled drivers build config 00:02:11.728 net/hinic: not in enabled drivers build config 00:02:11.728 net/hns3: not in enabled drivers build config 00:02:11.728 net/i40e: not in enabled drivers build config 00:02:11.728 net/iavf: not in enabled drivers build config 00:02:11.728 net/ice: not in enabled drivers build config 00:02:11.728 net/idpf: not in enabled drivers build config 00:02:11.728 net/igc: not in enabled drivers build config 00:02:11.728 net/ionic: not in enabled drivers build config 00:02:11.728 net/ipn3ke: not in enabled drivers build config 00:02:11.728 net/ixgbe: not in enabled drivers build config 00:02:11.728 net/mana: not in enabled drivers build config 00:02:11.728 net/memif: not in enabled drivers build config 00:02:11.728 net/mlx4: not in enabled drivers build 
config 00:02:11.728 net/mlx5: not in enabled drivers build config 00:02:11.728 net/mvneta: not in enabled drivers build config 00:02:11.728 net/mvpp2: not in enabled drivers build config 00:02:11.728 net/netvsc: not in enabled drivers build config 00:02:11.728 net/nfb: not in enabled drivers build config 00:02:11.728 net/nfp: not in enabled drivers build config 00:02:11.728 net/ngbe: not in enabled drivers build config 00:02:11.728 net/null: not in enabled drivers build config 00:02:11.728 net/octeontx: not in enabled drivers build config 00:02:11.728 net/octeon_ep: not in enabled drivers build config 00:02:11.728 net/pcap: not in enabled drivers build config 00:02:11.728 net/pfe: not in enabled drivers build config 00:02:11.728 net/qede: not in enabled drivers build config 00:02:11.728 net/ring: not in enabled drivers build config 00:02:11.728 net/sfc: not in enabled drivers build config 00:02:11.728 net/softnic: not in enabled drivers build config 00:02:11.728 net/tap: not in enabled drivers build config 00:02:11.728 net/thunderx: not in enabled drivers build config 00:02:11.728 net/txgbe: not in enabled drivers build config 00:02:11.728 net/vdev_netvsc: not in enabled drivers build config 00:02:11.728 net/vhost: not in enabled drivers build config 00:02:11.728 net/virtio: not in enabled drivers build config 00:02:11.728 net/vmxnet3: not in enabled drivers build config 00:02:11.728 raw/*: missing internal dependency, "rawdev" 00:02:11.728 crypto/armv8: not in enabled drivers build config 00:02:11.728 crypto/bcmfs: not in enabled drivers build config 00:02:11.728 crypto/caam_jr: not in enabled drivers build config 00:02:11.728 crypto/ccp: not in enabled drivers build config 00:02:11.728 crypto/cnxk: not in enabled drivers build config 00:02:11.728 crypto/dpaa_sec: not in enabled drivers build config 00:02:11.728 crypto/dpaa2_sec: not in enabled drivers build config 00:02:11.728 crypto/ipsec_mb: not in enabled drivers build config 00:02:11.728 crypto/mlx5: not in 
enabled drivers build config 00:02:11.728 crypto/mvsam: not in enabled drivers build config 00:02:11.728 crypto/nitrox: not in enabled drivers build config 00:02:11.728 crypto/null: not in enabled drivers build config 00:02:11.728 crypto/octeontx: not in enabled drivers build config 00:02:11.728 crypto/openssl: not in enabled drivers build config 00:02:11.728 crypto/scheduler: not in enabled drivers build config 00:02:11.728 crypto/uadk: not in enabled drivers build config 00:02:11.728 crypto/virtio: not in enabled drivers build config 00:02:11.728 compress/isal: not in enabled drivers build config 00:02:11.728 compress/mlx5: not in enabled drivers build config 00:02:11.728 compress/nitrox: not in enabled drivers build config 00:02:11.728 compress/octeontx: not in enabled drivers build config 00:02:11.728 compress/zlib: not in enabled drivers build config 00:02:11.728 regex/*: missing internal dependency, "regexdev" 00:02:11.728 ml/*: missing internal dependency, "mldev" 00:02:11.728 vdpa/ifc: not in enabled drivers build config 00:02:11.728 vdpa/mlx5: not in enabled drivers build config 00:02:11.728 vdpa/nfp: not in enabled drivers build config 00:02:11.728 vdpa/sfc: not in enabled drivers build config 00:02:11.728 event/*: missing internal dependency, "eventdev" 00:02:11.728 baseband/*: missing internal dependency, "bbdev" 00:02:11.728 gpu/*: missing internal dependency, "gpudev" 00:02:11.728 00:02:11.728 00:02:11.728 Build targets in project: 85 00:02:11.728 00:02:11.728 DPDK 24.03.0 00:02:11.728 00:02:11.728 User defined options 00:02:11.728 buildtype : debug 00:02:11.728 default_library : shared 00:02:11.728 libdir : lib 00:02:11.728 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.728 b_sanitize : address 00:02:11.728 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:11.728 c_link_args : 00:02:11.728 cpu_instruction_set: native 00:02:11.728 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:11.728 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:11.728 enable_docs : false 00:02:11.729 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:11.729 enable_kmods : false 00:02:11.729 max_lcores : 128 00:02:11.729 tests : false 00:02:11.729 00:02:11.729 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.729 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:11.729 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:11.729 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:11.729 [3/268] Linking static target lib/librte_kvargs.a 00:02:11.729 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:11.729 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:11.729 [6/268] Linking static target lib/librte_log.a 00:02:11.729 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:11.729 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:11.729 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:11.729 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:11.729 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:11.729 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:11.729 [13/268] Compiling C 
object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:11.729 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.729 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.729 [16/268] Linking static target lib/librte_telemetry.a 00:02:11.729 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:11.729 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.988 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:11.988 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.988 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:11.988 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:11.988 [23/268] Linking target lib/librte_log.so.24.1 00:02:11.988 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:11.988 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:11.988 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:12.246 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:12.246 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:12.246 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:12.505 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:12.505 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:12.505 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.505 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:12.505 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:12.505 [35/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:12.505 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:12.505 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:12.505 [38/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:12.764 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:12.764 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:12.764 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:12.764 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:12.764 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.764 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:12.764 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:13.023 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:13.023 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:13.023 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:13.023 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:13.282 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:13.282 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:13.282 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:13.282 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:13.282 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.282 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:13.541 [56/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:13.541 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.541 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:13.800 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:13.800 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.800 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:13.800 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.800 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.800 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.800 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:14.059 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:14.059 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:14.059 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:14.318 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.318 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.318 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.318 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.318 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:14.318 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:14.576 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.577 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.577 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:14.577 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.577 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.835 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.835 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.835 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.836 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.836 [84/268] Linking static target lib/librte_ring.a 00:02:14.836 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.836 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.099 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.099 [88/268] Linking static target lib/librte_eal.a 00:02:15.099 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.099 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.099 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.099 [92/268] Linking static target lib/librte_rcu.a 00:02:15.099 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.099 [94/268] Linking static target lib/librte_mempool.a 00:02:15.365 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.365 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.365 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.365 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.624 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.624 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.624 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.624 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 
00:02:15.624 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:15.624 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:15.883 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:15.883 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:15.883 [107/268] Linking static target lib/librte_meter.a
00:02:15.883 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:15.883 [109/268] Linking static target lib/librte_net.a
00:02:16.142 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:16.142 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:16.142 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:16.142 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.142 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:16.142 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.401 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.660 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:16.660 [118/268] Linking static target lib/librte_mbuf.a
00:02:16.660 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:16.660 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:16.918 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:16.918 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:16.918 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:17.177 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:17.177 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:17.177 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:17.177 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:17.177 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:17.177 [129/268] Linking static target lib/librte_pci.a
00:02:17.436 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:17.436 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:17.436 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:17.436 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:17.436 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:17.695 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:17.695 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:17.695 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:17.695 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.695 [139/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.695 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:17.695 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:17.695 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:17.695 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:17.695 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:17.695 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:17.695 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:17.695 [147/268] Linking static target lib/librte_cmdline.a
00:02:17.953 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:17.953 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:18.212 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:18.212 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:18.212 [152/268] Linking static target lib/librte_timer.a
00:02:18.212 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:18.472 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:18.472 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:18.472 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:18.472 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:18.732 [158/268] Linking static target lib/librte_compressdev.a
00:02:18.732 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.732 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:18.732 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:18.732 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:18.732 [163/268] Linking static target lib/librte_ethdev.a
00:02:18.732 [164/268] Linking static target lib/librte_hash.a
00:02:18.992 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:18.992 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:18.992 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:19.251 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:19.251 [169/268] Linking static target lib/librte_dmadev.a
00:02:19.251 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.251 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:19.251 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:19.251 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:19.511 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.511 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:19.821 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:19.821 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:19.821 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.821 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:19.821 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.080 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:20.080 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:20.080 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:20.080 [184/268] Linking static target lib/librte_cryptodev.a
00:02:20.080 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:20.080 [186/268] Linking static target lib/librte_power.a
00:02:20.340 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:20.340 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:20.340 [189/268] Linking static target lib/librte_reorder.a
00:02:20.340 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:20.599 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:20.599 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:20.599 [193/268] Linking static target lib/librte_security.a
00:02:20.859 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.859 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:21.118 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.118 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.377 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:21.377 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:21.377 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:21.637 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:21.637 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:21.897 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:21.897 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:21.897 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:21.897 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:21.897 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:21.897 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:22.158 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:22.158 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:22.158 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.158 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:22.419 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:22.419 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:22.419 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:22.419 [216/268] Linking static target drivers/librte_bus_vdev.a
00:02:22.419 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:22.419 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:22.419 [219/268] Linking static target drivers/librte_bus_pci.a
00:02:22.679 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:22.679 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:22.679 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.679 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:22.679 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:22.679 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:22.679 [226/268] Linking static target drivers/librte_mempool_ring.a
00:02:22.938 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.316 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:25.253 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.253 [230/268] Linking target lib/librte_eal.so.24.1
00:02:25.253 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:25.253 [232/268] Linking target lib/librte_ring.so.24.1
00:02:25.253 [233/268] Linking target lib/librte_meter.so.24.1
00:02:25.253 [234/268] Linking target lib/librte_pci.so.24.1
00:02:25.253 [235/268] Linking target lib/librte_dmadev.so.24.1
00:02:25.513 [236/268] Linking target lib/librte_timer.so.24.1
00:02:25.513 [237/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:25.513 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:25.513 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:25.513 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:25.513 [241/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:25.513 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:25.513 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:25.513 [244/268] Linking target lib/librte_rcu.so.24.1
00:02:25.513 [245/268] Linking target lib/librte_mempool.so.24.1
00:02:25.772 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:25.772 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:25.772 [248/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:25.772 [249/268] Linking target lib/librte_mbuf.so.24.1
00:02:25.772 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:26.032 [251/268] Linking target lib/librte_net.so.24.1
00:02:26.032 [252/268] Linking target lib/librte_compressdev.so.24.1
00:02:26.032 [253/268] Linking target lib/librte_reorder.so.24.1
00:02:26.032 [254/268] Linking target lib/librte_cryptodev.so.24.1
00:02:26.032 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:26.032 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:26.032 [257/268] Linking target lib/librte_hash.so.24.1
00:02:26.032 [258/268] Linking target lib/librte_cmdline.so.24.1
00:02:26.032 [259/268] Linking target lib/librte_security.so.24.1
00:02:26.291 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:27.229 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.229 [262/268] Linking target lib/librte_ethdev.so.24.1
00:02:27.229 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:27.489 [264/268] Linking target lib/librte_power.so.24.1
00:02:27.489 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:27.749 [266/268] Linking static target lib/librte_vhost.a
00:02:30.297 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.297 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:30.297 INFO: autodetecting backend as ninja
00:02:30.297 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:45.187 CC lib/ut/ut.o
00:02:45.187 CC lib/log/log_flags.o
00:02:45.187 CC lib/log/log.o
00:02:45.187 CC lib/log/log_deprecated.o
00:02:45.187 CC lib/ut_mock/mock.o
00:02:45.187 LIB libspdk_ut.a
00:02:45.187 LIB libspdk_ut_mock.a
00:02:45.187 LIB libspdk_log.a
00:02:45.187 SO libspdk_ut.so.2.0
00:02:45.187 SO libspdk_ut_mock.so.6.0
00:02:45.187 SYMLINK libspdk_ut.so
00:02:45.187 SO libspdk_log.so.7.1
00:02:45.187 SYMLINK libspdk_ut_mock.so
00:02:45.447 SYMLINK libspdk_log.so
00:02:45.707 CC lib/util/base64.o
00:02:45.707 CC lib/util/cpuset.o
00:02:45.707 CC lib/util/crc16.o
00:02:45.707 CC lib/util/bit_array.o
00:02:45.707 CC lib/util/crc32c.o
00:02:45.707 CC lib/util/crc32.o
00:02:45.707 CC lib/dma/dma.o
00:02:45.707 CC lib/ioat/ioat.o
00:02:45.707 CXX lib/trace_parser/trace.o
00:02:45.707 CC lib/util/crc32_ieee.o
00:02:45.707 CC lib/util/crc64.o
00:02:45.707 CC lib/util/dif.o
00:02:45.707 CC lib/vfio_user/host/vfio_user_pci.o
00:02:45.707 CC lib/util/fd.o
00:02:45.707 CC lib/util/fd_group.o
00:02:45.966 LIB libspdk_dma.a
00:02:45.966 SO libspdk_dma.so.5.0
00:02:45.966 CC lib/util/file.o
00:02:45.966 CC lib/vfio_user/host/vfio_user.o
00:02:45.966 CC lib/util/hexlify.o
00:02:45.966 SYMLINK libspdk_dma.so
00:02:45.966 CC lib/util/iov.o
00:02:45.966 CC lib/util/math.o
00:02:45.966 LIB libspdk_ioat.a
00:02:45.966 CC lib/util/net.o
00:02:45.966 SO libspdk_ioat.so.7.0
00:02:45.966 SYMLINK libspdk_ioat.so
00:02:45.966 CC lib/util/pipe.o
00:02:45.966 CC lib/util/strerror_tls.o
00:02:45.966 CC lib/util/string.o
00:02:45.966 CC lib/util/uuid.o
00:02:45.966 CC lib/util/xor.o
00:02:45.966 LIB libspdk_vfio_user.a
00:02:45.966 CC lib/util/zipf.o
00:02:45.966 SO libspdk_vfio_user.so.5.0
00:02:46.226 CC lib/util/md5.o
00:02:46.226 SYMLINK libspdk_vfio_user.so
00:02:46.486 LIB libspdk_util.a
00:02:46.486 SO libspdk_util.so.10.1
00:02:46.746 LIB libspdk_trace_parser.a
00:02:46.746 SYMLINK libspdk_util.so
00:02:46.746 SO libspdk_trace_parser.so.6.0
00:02:46.746 SYMLINK libspdk_trace_parser.so
00:02:46.746 CC lib/rdma_utils/rdma_utils.o
00:02:46.746 CC lib/conf/conf.o
00:02:46.746 CC lib/vmd/led.o
00:02:46.746 CC lib/vmd/vmd.o
00:02:46.746 CC lib/json/json_util.o
00:02:46.746 CC lib/json/json_parse.o
00:02:46.746 CC lib/json/json_write.o
00:02:46.746 CC lib/env_dpdk/env.o
00:02:47.006 CC lib/env_dpdk/memory.o
00:02:47.006 CC lib/idxd/idxd.o
00:02:47.006 CC lib/env_dpdk/pci.o
00:02:47.006 LIB libspdk_conf.a
00:02:47.006 CC lib/env_dpdk/init.o
00:02:47.006 CC lib/env_dpdk/threads.o
00:02:47.006 SO libspdk_conf.so.6.0
00:02:47.267 LIB libspdk_rdma_utils.a
00:02:47.267 SO libspdk_rdma_utils.so.1.0
00:02:47.267 LIB libspdk_json.a
00:02:47.267 SYMLINK libspdk_conf.so
00:02:47.267 CC lib/env_dpdk/pci_ioat.o
00:02:47.267 SO libspdk_json.so.6.0
00:02:47.267 SYMLINK libspdk_rdma_utils.so
00:02:47.267 CC lib/env_dpdk/pci_virtio.o
00:02:47.267 CC lib/env_dpdk/pci_vmd.o
00:02:47.267 SYMLINK libspdk_json.so
00:02:47.267 CC lib/env_dpdk/pci_idxd.o
00:02:47.267 CC lib/env_dpdk/pci_event.o
00:02:47.267 CC lib/env_dpdk/sigbus_handler.o
00:02:47.267 CC lib/env_dpdk/pci_dpdk.o
00:02:47.527 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:47.527 CC lib/idxd/idxd_user.o
00:02:47.527 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:47.527 CC lib/idxd/idxd_kernel.o
00:02:47.527 CC lib/rdma_provider/common.o
00:02:47.527 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:47.527 LIB libspdk_vmd.a
00:02:47.527 SO libspdk_vmd.so.6.0
00:02:47.787 SYMLINK libspdk_vmd.so
00:02:47.787 LIB libspdk_idxd.a
00:02:47.787 SO libspdk_idxd.so.12.1
00:02:47.787 LIB libspdk_rdma_provider.a
00:02:47.787 SO libspdk_rdma_provider.so.7.0
00:02:47.787 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:47.787 CC lib/jsonrpc/jsonrpc_server.o
00:02:47.787 CC lib/jsonrpc/jsonrpc_client.o
00:02:47.787 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:47.787 SYMLINK libspdk_idxd.so
00:02:47.787 SYMLINK libspdk_rdma_provider.so
00:02:48.046 LIB libspdk_jsonrpc.a
00:02:48.305 SO libspdk_jsonrpc.so.6.0
00:02:48.305 SYMLINK libspdk_jsonrpc.so
00:02:48.564 LIB libspdk_env_dpdk.a
00:02:48.564 SO libspdk_env_dpdk.so.15.1
00:02:48.824 CC lib/rpc/rpc.o
00:02:48.824 SYMLINK libspdk_env_dpdk.so
00:02:48.824 LIB libspdk_rpc.a
00:02:48.824 SO libspdk_rpc.so.6.0
00:02:49.084 SYMLINK libspdk_rpc.so
00:02:49.352 CC lib/keyring/keyring_rpc.o
00:02:49.352 CC lib/keyring/keyring.o
00:02:49.352 CC lib/notify/notify.o
00:02:49.352 CC lib/notify/notify_rpc.o
00:02:49.352 CC lib/trace/trace.o
00:02:49.352 CC lib/trace/trace_flags.o
00:02:49.352 CC lib/trace/trace_rpc.o
00:02:49.626 LIB libspdk_notify.a
00:02:49.626 SO libspdk_notify.so.6.0
00:02:49.626 LIB libspdk_keyring.a
00:02:49.626 LIB libspdk_trace.a
00:02:49.626 SYMLINK libspdk_notify.so
00:02:49.626 SO libspdk_keyring.so.2.0
00:02:49.626 SO libspdk_trace.so.11.0
00:02:49.626 SYMLINK libspdk_keyring.so
00:02:49.886 SYMLINK libspdk_trace.so
00:02:50.146 CC lib/thread/thread.o
00:02:50.146 CC lib/thread/iobuf.o
00:02:50.146 CC lib/sock/sock.o
00:02:50.146 CC lib/sock/sock_rpc.o
00:02:50.714 LIB libspdk_sock.a
00:02:50.714 SO libspdk_sock.so.10.0
00:02:50.714 SYMLINK libspdk_sock.so
00:02:51.284 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:51.284 CC lib/nvme/nvme_ctrlr.o
00:02:51.284 CC lib/nvme/nvme_fabric.o
00:02:51.284 CC lib/nvme/nvme_ns_cmd.o
00:02:51.284 CC lib/nvme/nvme_ns.o
00:02:51.284 CC lib/nvme/nvme_pcie_common.o
00:02:51.284 CC lib/nvme/nvme_pcie.o
00:02:51.284 CC lib/nvme/nvme.o
00:02:51.284 CC lib/nvme/nvme_qpair.o
00:02:51.852 LIB libspdk_thread.a
00:02:51.852 SO libspdk_thread.so.11.0
00:02:51.852 CC lib/nvme/nvme_quirks.o
00:02:51.852 CC lib/nvme/nvme_transport.o
00:02:51.852 SYMLINK libspdk_thread.so
00:02:51.852 CC lib/nvme/nvme_discovery.o
00:02:52.111 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:52.111 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:52.111 CC lib/nvme/nvme_tcp.o
00:02:52.111 CC lib/nvme/nvme_opal.o
00:02:52.111 CC lib/nvme/nvme_io_msg.o
00:02:52.371 CC lib/nvme/nvme_poll_group.o
00:02:52.371 CC lib/nvme/nvme_zns.o
00:02:52.630 CC lib/nvme/nvme_stubs.o
00:02:52.630 CC lib/nvme/nvme_auth.o
00:02:52.630 CC lib/accel/accel.o
00:02:52.630 CC lib/accel/accel_rpc.o
00:02:52.630 CC lib/nvme/nvme_cuse.o
00:02:52.888 CC lib/nvme/nvme_rdma.o
00:02:52.888 CC lib/accel/accel_sw.o
00:02:53.148 CC lib/init/json_config.o
00:02:53.148 CC lib/blob/blobstore.o
00:02:53.148 CC lib/blob/request.o
00:02:53.148 CC lib/virtio/virtio.o
00:02:53.407 CC lib/init/subsystem.o
00:02:53.407 CC lib/init/subsystem_rpc.o
00:02:53.407 CC lib/init/rpc.o
00:02:53.407 CC lib/virtio/virtio_vhost_user.o
00:02:53.667 CC lib/virtio/virtio_vfio_user.o
00:02:53.667 CC lib/virtio/virtio_pci.o
00:02:53.667 CC lib/blob/zeroes.o
00:02:53.667 LIB libspdk_init.a
00:02:53.667 SO libspdk_init.so.6.0
00:02:53.667 CC lib/blob/blob_bs_dev.o
00:02:53.667 SYMLINK libspdk_init.so
00:02:53.927 LIB libspdk_accel.a
00:02:53.927 SO libspdk_accel.so.16.0
00:02:53.927 LIB libspdk_virtio.a
00:02:53.927 SO libspdk_virtio.so.7.0
00:02:53.927 SYMLINK libspdk_accel.so
00:02:53.927 CC lib/event/app.o
00:02:53.927 CC lib/event/reactor.o
00:02:53.927 CC lib/event/log_rpc.o
00:02:53.927 CC lib/event/app_rpc.o
00:02:53.927 CC lib/fsdev/fsdev.o
00:02:53.927 CC lib/event/scheduler_static.o
00:02:53.927 SYMLINK libspdk_virtio.so
00:02:53.927 CC lib/fsdev/fsdev_io.o
00:02:54.187 CC lib/bdev/bdev.o
00:02:54.187 CC lib/bdev/bdev_rpc.o
00:02:54.187 CC lib/bdev/bdev_zone.o
00:02:54.187 CC lib/fsdev/fsdev_rpc.o
00:02:54.187 CC lib/bdev/part.o
00:02:54.447 CC lib/bdev/scsi_nvme.o
00:02:54.447 LIB libspdk_nvme.a
00:02:54.447 LIB libspdk_event.a
00:02:54.447 SO libspdk_event.so.14.0
00:02:54.707 SYMLINK libspdk_event.so
00:02:54.707 SO libspdk_nvme.so.15.0
00:02:54.707 LIB libspdk_fsdev.a
00:02:54.707 SO libspdk_fsdev.so.2.0
00:02:54.707 SYMLINK libspdk_fsdev.so
00:02:54.966 SYMLINK libspdk_nvme.so
00:02:55.227 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:55.796 LIB libspdk_fuse_dispatcher.a
00:02:56.055 SO libspdk_fuse_dispatcher.so.1.0
00:02:56.055 SYMLINK libspdk_fuse_dispatcher.so
00:02:56.624 LIB libspdk_blob.a
00:02:56.884 SO libspdk_blob.so.12.0
00:02:56.884 SYMLINK libspdk_blob.so
00:02:56.884 LIB libspdk_bdev.a
00:02:57.143 SO libspdk_bdev.so.17.0
00:02:57.143 SYMLINK libspdk_bdev.so
00:02:57.143 CC lib/blobfs/blobfs.o
00:02:57.143 CC lib/blobfs/tree.o
00:02:57.143 CC lib/lvol/lvol.o
00:02:57.402 CC lib/nvmf/ctrlr.o
00:02:57.402 CC lib/nvmf/ctrlr_discovery.o
00:02:57.402 CC lib/nvmf/ctrlr_bdev.o
00:02:57.402 CC lib/nbd/nbd.o
00:02:57.402 CC lib/scsi/dev.o
00:02:57.402 CC lib/ftl/ftl_core.o
00:02:57.402 CC lib/ublk/ublk.o
00:02:57.402 CC lib/ftl/ftl_init.o
00:02:57.660 CC lib/scsi/lun.o
00:02:57.660 CC lib/scsi/port.o
00:02:57.660 CC lib/nbd/nbd_rpc.o
00:02:57.660 CC lib/ftl/ftl_layout.o
00:02:57.919 CC lib/nvmf/subsystem.o
00:02:57.919 CC lib/scsi/scsi.o
00:02:57.919 CC lib/scsi/scsi_bdev.o
00:02:57.919 LIB libspdk_nbd.a
00:02:57.919 SO libspdk_nbd.so.7.0
00:02:57.919 SYMLINK libspdk_nbd.so
00:02:57.919 CC lib/scsi/scsi_pr.o
00:02:57.919 CC lib/scsi/scsi_rpc.o
00:02:58.176 CC lib/ublk/ublk_rpc.o
00:02:58.176 CC lib/ftl/ftl_debug.o
00:02:58.176 CC lib/scsi/task.o
00:02:58.176 CC lib/ftl/ftl_io.o
00:02:58.176 LIB libspdk_blobfs.a
00:02:58.176 SO libspdk_blobfs.so.11.0
00:02:58.176 LIB libspdk_ublk.a
00:02:58.176 SO libspdk_ublk.so.3.0
00:02:58.176 LIB libspdk_lvol.a
00:02:58.176 SYMLINK libspdk_blobfs.so
00:02:58.433 CC lib/nvmf/nvmf.o
00:02:58.433 SO libspdk_lvol.so.11.0
00:02:58.433 CC lib/nvmf/nvmf_rpc.o
00:02:58.433 CC lib/ftl/ftl_sb.o
00:02:58.433 SYMLINK libspdk_ublk.so
00:02:58.433 CC lib/ftl/ftl_l2p.o
00:02:58.433 CC lib/ftl/ftl_l2p_flat.o
00:02:58.433 SYMLINK libspdk_lvol.so
00:02:58.433 CC lib/ftl/ftl_nv_cache.o
00:02:58.433 CC lib/ftl/ftl_band.o
00:02:58.433 LIB libspdk_scsi.a
00:02:58.433 SO libspdk_scsi.so.9.0
00:02:58.433 CC lib/ftl/ftl_band_ops.o
00:02:58.691 CC lib/ftl/ftl_writer.o
00:02:58.691 CC lib/ftl/ftl_rq.o
00:02:58.691 SYMLINK libspdk_scsi.so
00:02:58.691 CC lib/ftl/ftl_reloc.o
00:02:58.691 CC lib/ftl/ftl_l2p_cache.o
00:02:58.949 CC lib/ftl/ftl_p2l.o
00:02:58.949 CC lib/ftl/ftl_p2l_log.o
00:02:58.949 CC lib/nvmf/transport.o
00:02:58.949 CC lib/ftl/mngt/ftl_mngt.o
00:02:59.207 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:59.207 CC lib/nvmf/tcp.o
00:02:59.207 CC lib/nvmf/stubs.o
00:02:59.207 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:59.466 CC lib/iscsi/conn.o
00:02:59.466 CC lib/iscsi/init_grp.o
00:02:59.466 CC lib/iscsi/iscsi.o
00:02:59.466 CC lib/vhost/vhost.o
00:02:59.466 CC lib/nvmf/mdns_server.o
00:02:59.466 CC lib/nvmf/rdma.o
00:02:59.466 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:59.725 CC lib/iscsi/param.o
00:02:59.725 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:59.725 CC lib/nvmf/auth.o
00:02:59.725 CC lib/iscsi/portal_grp.o
00:02:59.984 CC lib/iscsi/tgt_node.o
00:02:59.984 CC lib/iscsi/iscsi_subsystem.o
00:02:59.984 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:59.984 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:59.984 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:00.242 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:00.242 CC lib/vhost/vhost_rpc.o
00:03:00.242 CC lib/vhost/vhost_scsi.o
00:03:00.500 CC lib/vhost/vhost_blk.o
00:03:00.500 CC lib/vhost/rte_vhost_user.o
00:03:00.500 CC lib/iscsi/iscsi_rpc.o
00:03:00.500 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:00.759 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:00.759 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:00.759 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:01.016 CC lib/ftl/utils/ftl_conf.o
00:03:01.016 CC lib/ftl/utils/ftl_md.o
00:03:01.016 CC lib/iscsi/task.o
00:03:01.016 CC lib/ftl/utils/ftl_mempool.o
00:03:01.274 CC lib/ftl/utils/ftl_bitmap.o
00:03:01.274 CC lib/ftl/utils/ftl_property.o
00:03:01.274 LIB libspdk_iscsi.a
00:03:01.274 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:01.274 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:01.274 SO libspdk_iscsi.so.8.0
00:03:01.274 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:01.274 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:01.274 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:01.532 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:01.532 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:01.532 SYMLINK libspdk_iscsi.so
00:03:01.532 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:01.532 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:01.532 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:01.532 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:01.532 LIB libspdk_vhost.a
00:03:01.532 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:01.532 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:01.532 SO libspdk_vhost.so.8.0
00:03:01.532 CC lib/ftl/base/ftl_base_dev.o
00:03:01.532 CC lib/ftl/base/ftl_base_bdev.o
00:03:01.790 CC lib/ftl/ftl_trace.o
00:03:01.790 SYMLINK libspdk_vhost.so
00:03:02.048 LIB libspdk_ftl.a
00:03:02.048 LIB libspdk_nvmf.a
00:03:02.306 SO libspdk_ftl.so.9.0
00:03:02.306 SO libspdk_nvmf.so.20.0
00:03:02.562 SYMLINK libspdk_nvmf.so
00:03:02.563 SYMLINK libspdk_ftl.so
00:03:02.820 CC module/env_dpdk/env_dpdk_rpc.o
00:03:03.079 CC module/fsdev/aio/fsdev_aio.o
00:03:03.079 CC module/accel/dsa/accel_dsa.o
00:03:03.079 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:03.079 CC module/accel/iaa/accel_iaa.o
00:03:03.079 CC module/sock/posix/posix.o
00:03:03.079 CC module/accel/error/accel_error.o
00:03:03.079 CC module/blob/bdev/blob_bdev.o
00:03:03.079 CC module/accel/ioat/accel_ioat.o
00:03:03.079 CC module/keyring/file/keyring.o
00:03:03.079 LIB libspdk_env_dpdk_rpc.a
00:03:03.079 SO libspdk_env_dpdk_rpc.so.6.0
00:03:03.079 SYMLINK libspdk_env_dpdk_rpc.so
00:03:03.079 CC module/accel/iaa/accel_iaa_rpc.o
00:03:03.079 CC module/keyring/file/keyring_rpc.o
00:03:03.079 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:03.079 LIB libspdk_scheduler_dynamic.a
00:03:03.079 CC module/accel/ioat/accel_ioat_rpc.o
00:03:03.337 CC module/accel/error/accel_error_rpc.o
00:03:03.337 SO libspdk_scheduler_dynamic.so.4.0
00:03:03.337 LIB libspdk_accel_iaa.a
00:03:03.337 SO libspdk_accel_iaa.so.3.0
00:03:03.337 CC module/accel/dsa/accel_dsa_rpc.o
00:03:03.337 SYMLINK libspdk_scheduler_dynamic.so
00:03:03.337 LIB libspdk_keyring_file.a
00:03:03.337 CC module/fsdev/aio/linux_aio_mgr.o
00:03:03.337 LIB libspdk_accel_ioat.a
00:03:03.337 SYMLINK libspdk_accel_iaa.so
00:03:03.337 LIB libspdk_blob_bdev.a
00:03:03.337 SO libspdk_keyring_file.so.2.0
00:03:03.337 SO libspdk_blob_bdev.so.12.0
00:03:03.337 LIB libspdk_accel_error.a
00:03:03.337 SO libspdk_accel_ioat.so.6.0
00:03:03.337 SO libspdk_accel_error.so.2.0
00:03:03.337 SYMLINK libspdk_keyring_file.so
00:03:03.337 LIB libspdk_accel_dsa.a
00:03:03.337 SYMLINK libspdk_accel_ioat.so
00:03:03.337 SYMLINK libspdk_blob_bdev.so
00:03:03.337 SO libspdk_accel_dsa.so.5.0
00:03:03.337 SYMLINK libspdk_accel_error.so
00:03:03.337 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:03.595 SYMLINK libspdk_accel_dsa.so
00:03:03.595 CC module/keyring/linux/keyring.o
00:03:03.595 CC module/keyring/linux/keyring_rpc.o
00:03:03.595 CC module/scheduler/gscheduler/gscheduler.o
00:03:03.595 LIB libspdk_scheduler_dpdk_governor.a
00:03:03.595 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:03.595 LIB libspdk_keyring_linux.a
00:03:03.595 SO libspdk_keyring_linux.so.1.0
00:03:03.854 CC module/blobfs/bdev/blobfs_bdev.o
00:03:03.854 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:03.854 CC module/bdev/error/vbdev_error.o
00:03:03.854 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:03.854 LIB libspdk_scheduler_gscheduler.a
00:03:03.854 CC module/bdev/gpt/gpt.o
00:03:03.854 CC module/bdev/delay/vbdev_delay.o
00:03:03.854 LIB libspdk_fsdev_aio.a
00:03:03.854 SO libspdk_scheduler_gscheduler.so.4.0
00:03:03.854 SYMLINK libspdk_keyring_linux.so
00:03:03.854 LIB libspdk_sock_posix.a
00:03:03.854 SO libspdk_fsdev_aio.so.1.0
00:03:03.854 SO libspdk_sock_posix.so.6.0
00:03:03.854 SYMLINK libspdk_scheduler_gscheduler.so
00:03:03.854 CC module/bdev/gpt/vbdev_gpt.o
00:03:03.854 CC module/bdev/lvol/vbdev_lvol.o
00:03:03.854 SYMLINK libspdk_fsdev_aio.so
00:03:03.854 CC module/bdev/error/vbdev_error_rpc.o
00:03:03.854 SYMLINK libspdk_sock_posix.so
00:03:03.854 LIB libspdk_blobfs_bdev.a
00:03:03.854 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:03.854 SO libspdk_blobfs_bdev.so.6.0
00:03:04.113 CC module/bdev/malloc/bdev_malloc.o
00:03:04.113 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:04.113 SYMLINK libspdk_blobfs_bdev.so
00:03:04.113 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:04.113 LIB libspdk_bdev_error.a
00:03:04.113 SO libspdk_bdev_error.so.6.0
00:03:04.113 CC module/bdev/null/bdev_null.o
00:03:04.113 LIB libspdk_bdev_gpt.a
00:03:04.113 SO libspdk_bdev_gpt.so.6.0
00:03:04.113 CC module/bdev/nvme/bdev_nvme.o
00:03:04.113 LIB libspdk_bdev_delay.a
00:03:04.113 SYMLINK libspdk_bdev_error.so
00:03:04.372 SO libspdk_bdev_delay.so.6.0
00:03:04.372 SYMLINK libspdk_bdev_gpt.so
00:03:04.372 CC module/bdev/passthru/vbdev_passthru.o
00:03:04.372 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:04.372 SYMLINK libspdk_bdev_delay.so
00:03:04.372 CC module/bdev/null/bdev_null_rpc.o
00:03:04.372 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:04.372 CC module/bdev/raid/bdev_raid.o
00:03:04.372 LIB libspdk_bdev_malloc.a
00:03:04.372 CC module/bdev/split/vbdev_split.o
00:03:04.372 SO libspdk_bdev_malloc.so.6.0
00:03:04.630 CC module/bdev/raid/bdev_raid_rpc.o
00:03:04.630 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:04.630 LIB libspdk_bdev_lvol.a
00:03:04.630 SO libspdk_bdev_lvol.so.6.0
00:03:04.630 SYMLINK libspdk_bdev_malloc.so
00:03:04.631 LIB libspdk_bdev_null.a
00:03:04.631 LIB libspdk_bdev_passthru.a
00:03:04.631 SO libspdk_bdev_null.so.6.0
00:03:04.631 SYMLINK libspdk_bdev_lvol.so
00:03:04.631 SO libspdk_bdev_passthru.so.6.0
00:03:04.631 SYMLINK libspdk_bdev_null.so
00:03:04.631 SYMLINK libspdk_bdev_passthru.so
00:03:04.631 CC module/bdev/split/vbdev_split_rpc.o
00:03:04.889 CC module/bdev/aio/bdev_aio.o
00:03:04.889 CC module/bdev/raid/bdev_raid_sb.o
00:03:04.889 CC module/bdev/ftl/bdev_ftl.o
00:03:04.889 LIB libspdk_bdev_split.a
00:03:04.889 CC module/bdev/iscsi/bdev_iscsi.o
00:03:04.889 SO libspdk_bdev_split.so.6.0
00:03:04.889 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:04.889 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:04.889 SYMLINK libspdk_bdev_split.so
00:03:04.889 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:05.148 LIB libspdk_bdev_zone_block.a
00:03:05.148 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:05.148 CC module/bdev/aio/bdev_aio_rpc.o
00:03:05.148 SO libspdk_bdev_zone_block.so.6.0
00:03:05.148 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:05.148 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:05.148 SYMLINK libspdk_bdev_zone_block.so
00:03:05.148 CC module/bdev/raid/raid0.o
00:03:05.148 CC module/bdev/nvme/nvme_rpc.o
00:03:05.466 CC module/bdev/raid/raid1.o
00:03:05.466 LIB libspdk_bdev_aio.a
00:03:05.466 LIB libspdk_bdev_iscsi.a
00:03:05.466 CC module/bdev/nvme/bdev_mdns_client.o
00:03:05.466 SO libspdk_bdev_aio.so.6.0
00:03:05.466 SO libspdk_bdev_iscsi.so.6.0
00:03:05.466 SYMLINK libspdk_bdev_aio.so
00:03:05.466 CC module/bdev/raid/concat.o
00:03:05.466 SYMLINK libspdk_bdev_iscsi.so
00:03:05.466 CC module/bdev/nvme/vbdev_opal.o
00:03:05.466 LIB libspdk_bdev_ftl.a
00:03:05.466 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:05.466 SO libspdk_bdev_ftl.so.6.0
00:03:05.466 CC module/bdev/raid/raid5f.o
00:03:05.466 LIB libspdk_bdev_virtio.a
00:03:05.466 SYMLINK libspdk_bdev_ftl.so
00:03:05.466 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:05.739 SO libspdk_bdev_virtio.so.6.0
00:03:05.739 SYMLINK libspdk_bdev_virtio.so
00:03:06.306 LIB libspdk_bdev_raid.a
00:03:06.306 SO libspdk_bdev_raid.so.6.0
00:03:06.306 SYMLINK libspdk_bdev_raid.so
00:03:07.683 LIB libspdk_bdev_nvme.a
00:03:07.683 SO libspdk_bdev_nvme.so.7.1
00:03:07.683 SYMLINK libspdk_bdev_nvme.so
00:03:08.250 CC module/event/subsystems/iobuf/iobuf.o
00:03:08.250 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:08.250 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:08.250 CC module/event/subsystems/fsdev/fsdev.o
00:03:08.250 CC module/event/subsystems/keyring/keyring.o
00:03:08.250 CC module/event/subsystems/scheduler/scheduler.o
00:03:08.250 CC module/event/subsystems/vmd/vmd.o
00:03:08.250 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:08.250 CC module/event/subsystems/sock/sock.o
00:03:08.509 LIB libspdk_event_scheduler.a
00:03:08.509 LIB libspdk_event_sock.a
00:03:08.509 LIB libspdk_event_keyring.a
00:03:08.509 LIB libspdk_event_fsdev.a
00:03:08.509 LIB libspdk_event_vhost_blk.a
00:03:08.509 LIB libspdk_event_vmd.a
00:03:08.509 SO libspdk_event_scheduler.so.4.0
00:03:08.509 SO libspdk_event_sock.so.5.0
00:03:08.509 LIB libspdk_event_iobuf.a
00:03:08.509 SO libspdk_event_vhost_blk.so.3.0
00:03:08.509 SO libspdk_event_fsdev.so.1.0
00:03:08.509 SO libspdk_event_keyring.so.1.0
00:03:08.509 SO libspdk_event_vmd.so.6.0
00:03:08.509 SO libspdk_event_iobuf.so.3.0
00:03:08.509 SYMLINK libspdk_event_sock.so
00:03:08.509 SYMLINK libspdk_event_scheduler.so
00:03:08.509 SYMLINK libspdk_event_vhost_blk.so
00:03:08.509 SYMLINK libspdk_event_fsdev.so
00:03:08.509 SYMLINK libspdk_event_keyring.so
00:03:08.509 SYMLINK libspdk_event_vmd.so
00:03:08.509 SYMLINK libspdk_event_iobuf.so
00:03:09.077 CC module/event/subsystems/accel/accel.o
00:03:09.077 LIB libspdk_event_accel.a
00:03:09.336 SO libspdk_event_accel.so.6.0
00:03:09.336 SYMLINK libspdk_event_accel.so
00:03:09.903 CC module/event/subsystems/bdev/bdev.o
00:03:09.903 LIB libspdk_event_bdev.a
00:03:09.903 SO libspdk_event_bdev.so.6.0
00:03:10.162 SYMLINK libspdk_event_bdev.so
00:03:10.421 CC module/event/subsystems/nbd/nbd.o
00:03:10.421 CC module/event/subsystems/scsi/scsi.o
00:03:10.421 CC module/event/subsystems/ublk/ublk.o
00:03:10.421 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:10.421 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:10.680 LIB libspdk_event_nbd.a
00:03:10.680 LIB libspdk_event_scsi.a
00:03:10.680 LIB libspdk_event_ublk.a
00:03:10.680 SO libspdk_event_nbd.so.6.0
00:03:10.680 SO libspdk_event_ublk.so.3.0
00:03:10.680 SO libspdk_event_scsi.so.6.0
00:03:10.680 SYMLINK libspdk_event_nbd.so
00:03:10.680 LIB libspdk_event_nvmf.a
00:03:10.680 SYMLINK libspdk_event_ublk.so
00:03:10.680 SYMLINK libspdk_event_scsi.so
00:03:10.680 SO libspdk_event_nvmf.so.6.0
00:03:10.939 SYMLINK libspdk_event_nvmf.so
00:03:11.197 CC module/event/subsystems/iscsi/iscsi.o
00:03:11.197 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:11.197 LIB libspdk_event_vhost_scsi.a
00:03:11.456 LIB libspdk_event_iscsi.a
00:03:11.456 SO libspdk_event_vhost_scsi.so.3.0
00:03:11.456 SO libspdk_event_iscsi.so.6.0
00:03:11.456 SYMLINK libspdk_event_vhost_scsi.so
00:03:11.456 SYMLINK libspdk_event_iscsi.so
00:03:11.715 SO libspdk.so.6.0
00:03:11.715 SYMLINK libspdk.so
00:03:11.974 CC app/trace_record/trace_record.o
00:03:11.974 CXX app/trace/trace.o
00:03:12.233 CC app/iscsi_tgt/iscsi_tgt.o
00:03:12.233 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:12.233 CC app/nvmf_tgt/nvmf_main.o
00:03:12.233 CC app/spdk_tgt/spdk_tgt.o
00:03:12.233 CC examples/ioat/perf/perf.o
00:03:12.233 CC test/thread/poller_perf/poller_perf.o
00:03:12.233 CC examples/util/zipf/zipf.o
00:03:12.233 CC test/dma/test_dma/test_dma.o
00:03:12.233 LINK iscsi_tgt
00:03:12.233 LINK interrupt_tgt
00:03:12.233 LINK poller_perf 00:03:12.233 LINK zipf 00:03:12.490 LINK nvmf_tgt 00:03:12.490 LINK spdk_tgt 00:03:12.490 LINK spdk_trace_record 00:03:12.490 LINK ioat_perf 00:03:12.490 LINK spdk_trace 00:03:12.747 CC app/spdk_lspci/spdk_lspci.o 00:03:12.747 CC app/spdk_nvme_identify/identify.o 00:03:12.747 CC app/spdk_nvme_perf/perf.o 00:03:12.747 CC examples/ioat/verify/verify.o 00:03:12.747 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.747 CC app/spdk_top/spdk_top.o 00:03:12.747 CC app/spdk_dd/spdk_dd.o 00:03:12.747 LINK test_dma 00:03:12.747 CC test/app/bdev_svc/bdev_svc.o 00:03:12.747 LINK spdk_lspci 00:03:13.005 LINK verify 00:03:13.005 LINK bdev_svc 00:03:13.005 LINK spdk_nvme_discover 00:03:13.005 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:13.005 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:13.263 LINK spdk_dd 00:03:13.263 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:13.263 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:13.264 CC app/fio/nvme/fio_plugin.o 00:03:13.522 CC examples/thread/thread/thread_ex.o 00:03:13.522 CC app/vhost/vhost.o 00:03:13.522 CC test/app/histogram_perf/histogram_perf.o 00:03:13.522 LINK nvme_fuzz 00:03:13.782 LINK thread 00:03:13.782 LINK vhost 00:03:13.782 LINK spdk_nvme_perf 00:03:13.782 LINK vhost_fuzz 00:03:13.782 LINK histogram_perf 00:03:13.782 LINK spdk_nvme_identify 00:03:13.782 CC test/app/jsoncat/jsoncat.o 00:03:13.782 LINK spdk_top 00:03:14.040 LINK jsoncat 00:03:14.040 LINK spdk_nvme 00:03:14.040 CC test/app/stub/stub.o 00:03:14.040 CC examples/sock/hello_world/hello_sock.o 00:03:14.040 CC app/fio/bdev/fio_plugin.o 00:03:14.298 CC examples/vmd/lsvmd/lsvmd.o 00:03:14.298 CC examples/idxd/perf/perf.o 00:03:14.298 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:14.298 CC examples/vmd/led/led.o 00:03:14.298 LINK stub 00:03:14.298 LINK lsvmd 00:03:14.298 CC examples/accel/perf/accel_perf.o 00:03:14.298 LINK hello_sock 00:03:14.298 LINK led 00:03:14.556 CC examples/blob/hello_world/hello_blob.o 00:03:14.556 
LINK hello_fsdev 00:03:14.556 LINK idxd_perf 00:03:14.556 CC examples/blob/cli/blobcli.o 00:03:14.556 TEST_HEADER include/spdk/accel.h 00:03:14.556 TEST_HEADER include/spdk/accel_module.h 00:03:14.556 TEST_HEADER include/spdk/assert.h 00:03:14.556 TEST_HEADER include/spdk/barrier.h 00:03:14.556 TEST_HEADER include/spdk/base64.h 00:03:14.556 TEST_HEADER include/spdk/bdev.h 00:03:14.556 TEST_HEADER include/spdk/bdev_module.h 00:03:14.556 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.556 TEST_HEADER include/spdk/bit_array.h 00:03:14.556 TEST_HEADER include/spdk/bit_pool.h 00:03:14.556 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.556 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.556 TEST_HEADER include/spdk/blobfs.h 00:03:14.556 TEST_HEADER include/spdk/blob.h 00:03:14.556 TEST_HEADER include/spdk/conf.h 00:03:14.556 TEST_HEADER include/spdk/config.h 00:03:14.556 TEST_HEADER include/spdk/cpuset.h 00:03:14.556 TEST_HEADER include/spdk/crc16.h 00:03:14.556 TEST_HEADER include/spdk/crc32.h 00:03:14.556 TEST_HEADER include/spdk/crc64.h 00:03:14.556 TEST_HEADER include/spdk/dif.h 00:03:14.556 TEST_HEADER include/spdk/dma.h 00:03:14.556 TEST_HEADER include/spdk/endian.h 00:03:14.556 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.556 TEST_HEADER include/spdk/env.h 00:03:14.556 TEST_HEADER include/spdk/event.h 00:03:14.814 TEST_HEADER include/spdk/fd_group.h 00:03:14.814 TEST_HEADER include/spdk/fd.h 00:03:14.814 TEST_HEADER include/spdk/file.h 00:03:14.814 TEST_HEADER include/spdk/fsdev.h 00:03:14.814 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.814 LINK spdk_bdev 00:03:14.814 TEST_HEADER include/spdk/ftl.h 00:03:14.814 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.814 TEST_HEADER include/spdk/hexlify.h 00:03:14.814 TEST_HEADER include/spdk/histogram_data.h 00:03:14.814 TEST_HEADER include/spdk/idxd.h 00:03:14.814 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.814 TEST_HEADER include/spdk/init.h 00:03:14.814 TEST_HEADER include/spdk/ioat.h 00:03:14.814 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:14.814 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.814 TEST_HEADER include/spdk/json.h 00:03:14.814 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.814 TEST_HEADER include/spdk/keyring.h 00:03:14.814 TEST_HEADER include/spdk/keyring_module.h 00:03:14.814 TEST_HEADER include/spdk/likely.h 00:03:14.814 TEST_HEADER include/spdk/log.h 00:03:14.814 TEST_HEADER include/spdk/lvol.h 00:03:14.814 TEST_HEADER include/spdk/md5.h 00:03:14.814 TEST_HEADER include/spdk/memory.h 00:03:14.814 TEST_HEADER include/spdk/mmio.h 00:03:14.814 TEST_HEADER include/spdk/nbd.h 00:03:14.814 TEST_HEADER include/spdk/net.h 00:03:14.814 TEST_HEADER include/spdk/notify.h 00:03:14.814 TEST_HEADER include/spdk/nvme.h 00:03:14.814 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.814 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.814 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.814 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.814 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.814 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.814 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.814 TEST_HEADER include/spdk/nvmf.h 00:03:14.814 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.814 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.814 TEST_HEADER include/spdk/opal.h 00:03:14.814 TEST_HEADER include/spdk/opal_spec.h 00:03:14.814 TEST_HEADER include/spdk/pci_ids.h 00:03:14.814 TEST_HEADER include/spdk/pipe.h 00:03:14.814 TEST_HEADER include/spdk/queue.h 00:03:14.814 TEST_HEADER include/spdk/reduce.h 00:03:14.814 TEST_HEADER include/spdk/rpc.h 00:03:14.814 TEST_HEADER include/spdk/scheduler.h 00:03:14.814 TEST_HEADER include/spdk/scsi.h 00:03:14.814 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.814 TEST_HEADER include/spdk/sock.h 00:03:14.814 TEST_HEADER include/spdk/stdinc.h 00:03:14.814 TEST_HEADER include/spdk/string.h 00:03:14.814 TEST_HEADER include/spdk/thread.h 00:03:14.814 TEST_HEADER include/spdk/trace.h 00:03:14.814 CC examples/nvme/hello_world/hello_world.o 
00:03:14.814 TEST_HEADER include/spdk/trace_parser.h 00:03:14.814 TEST_HEADER include/spdk/tree.h 00:03:14.814 TEST_HEADER include/spdk/ublk.h 00:03:14.814 TEST_HEADER include/spdk/util.h 00:03:14.814 TEST_HEADER include/spdk/uuid.h 00:03:14.814 TEST_HEADER include/spdk/version.h 00:03:14.814 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.814 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.814 TEST_HEADER include/spdk/vhost.h 00:03:14.814 TEST_HEADER include/spdk/vmd.h 00:03:14.814 TEST_HEADER include/spdk/xor.h 00:03:14.814 TEST_HEADER include/spdk/zipf.h 00:03:14.814 CXX test/cpp_headers/accel.o 00:03:14.814 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.814 LINK hello_blob 00:03:14.814 CC test/env/vtophys/vtophys.o 00:03:14.814 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.814 CC examples/nvme/reconnect/reconnect.o 00:03:15.072 LINK accel_perf 00:03:15.072 CXX test/cpp_headers/accel_module.o 00:03:15.072 LINK vtophys 00:03:15.072 LINK hello_world 00:03:15.072 LINK env_dpdk_post_init 00:03:15.072 CXX test/cpp_headers/assert.o 00:03:15.330 CXX test/cpp_headers/barrier.o 00:03:15.330 LINK blobcli 00:03:15.330 CXX test/cpp_headers/base64.o 00:03:15.330 LINK iscsi_fuzz 00:03:15.330 CXX test/cpp_headers/bdev.o 00:03:15.330 CXX test/cpp_headers/bdev_module.o 00:03:15.330 CC test/env/memory/memory_ut.o 00:03:15.330 LINK reconnect 00:03:15.330 CXX test/cpp_headers/bdev_zone.o 00:03:15.330 CXX test/cpp_headers/bit_array.o 00:03:15.330 CC test/env/pci/pci_ut.o 00:03:15.588 CXX test/cpp_headers/bit_pool.o 00:03:15.588 LINK mem_callbacks 00:03:15.588 CXX test/cpp_headers/blob_bdev.o 00:03:15.588 CXX test/cpp_headers/blobfs_bdev.o 00:03:15.588 CXX test/cpp_headers/blobfs.o 00:03:15.588 CXX test/cpp_headers/blob.o 00:03:15.589 CXX test/cpp_headers/conf.o 00:03:15.589 CXX test/cpp_headers/config.o 00:03:15.847 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:15.847 CC examples/nvme/arbitration/arbitration.o 00:03:15.847 CXX test/cpp_headers/cpuset.o 
00:03:15.847 CXX test/cpp_headers/crc16.o 00:03:15.847 CC examples/nvme/hotplug/hotplug.o 00:03:15.847 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:15.847 CC examples/nvme/abort/abort.o 00:03:15.847 CXX test/cpp_headers/crc32.o 00:03:15.847 CC test/event/event_perf/event_perf.o 00:03:15.847 LINK pci_ut 00:03:16.105 CC test/event/reactor/reactor.o 00:03:16.105 LINK hotplug 00:03:16.105 LINK cmb_copy 00:03:16.105 CXX test/cpp_headers/crc64.o 00:03:16.105 LINK arbitration 00:03:16.105 LINK event_perf 00:03:16.105 LINK reactor 00:03:16.363 CXX test/cpp_headers/dif.o 00:03:16.363 CXX test/cpp_headers/dma.o 00:03:16.363 CXX test/cpp_headers/endian.o 00:03:16.363 CXX test/cpp_headers/env_dpdk.o 00:03:16.363 CXX test/cpp_headers/env.o 00:03:16.363 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.363 LINK nvme_manage 00:03:16.363 LINK abort 00:03:16.363 CC test/event/reactor_perf/reactor_perf.o 00:03:16.363 CXX test/cpp_headers/event.o 00:03:16.621 LINK pmr_persistence 00:03:16.621 CC test/rpc_client/rpc_client_test.o 00:03:16.621 LINK reactor_perf 00:03:16.621 CC test/nvme/aer/aer.o 00:03:16.621 CC test/nvme/reset/reset.o 00:03:16.621 CXX test/cpp_headers/fd_group.o 00:03:16.621 CC test/event/app_repeat/app_repeat.o 00:03:16.621 CC test/accel/dif/dif.o 00:03:16.621 LINK memory_ut 00:03:16.621 CC test/blobfs/mkfs/mkfs.o 00:03:16.878 LINK rpc_client_test 00:03:16.878 CXX test/cpp_headers/fd.o 00:03:16.878 LINK app_repeat 00:03:16.878 CC test/nvme/sgl/sgl.o 00:03:16.878 LINK reset 00:03:16.878 LINK mkfs 00:03:16.878 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.878 LINK aer 00:03:17.136 CXX test/cpp_headers/file.o 00:03:17.136 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.136 CC test/event/scheduler/scheduler.o 00:03:17.136 LINK sgl 00:03:17.136 CXX test/cpp_headers/fsdev.o 00:03:17.136 CC test/lvol/esnap/esnap.o 00:03:17.136 LINK hello_bdev 00:03:17.136 CC test/nvme/e2edp/nvme_dp.o 00:03:17.393 CXX test/cpp_headers/fsdev_module.o 00:03:17.393 CC 
test/nvme/overhead/overhead.o 00:03:17.393 LINK scheduler 00:03:17.393 CXX test/cpp_headers/ftl.o 00:03:17.393 CXX test/cpp_headers/gpt_spec.o 00:03:17.393 CC test/nvme/err_injection/err_injection.o 00:03:17.651 CC test/nvme/startup/startup.o 00:03:17.651 LINK nvme_dp 00:03:17.651 LINK overhead 00:03:17.651 LINK dif 00:03:17.651 CXX test/cpp_headers/hexlify.o 00:03:17.651 CXX test/cpp_headers/histogram_data.o 00:03:17.651 LINK err_injection 00:03:17.651 LINK startup 00:03:17.909 CC test/nvme/reserve/reserve.o 00:03:17.909 CXX test/cpp_headers/idxd.o 00:03:17.909 CXX test/cpp_headers/idxd_spec.o 00:03:17.909 CC test/nvme/simple_copy/simple_copy.o 00:03:17.909 CC test/nvme/connect_stress/connect_stress.o 00:03:17.909 CC test/nvme/boot_partition/boot_partition.o 00:03:17.909 CC test/nvme/compliance/nvme_compliance.o 00:03:17.909 CXX test/cpp_headers/init.o 00:03:18.167 LINK reserve 00:03:18.167 CC test/nvme/fused_ordering/fused_ordering.o 00:03:18.167 LINK connect_stress 00:03:18.167 LINK simple_copy 00:03:18.167 LINK boot_partition 00:03:18.167 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:18.167 CXX test/cpp_headers/ioat.o 00:03:18.167 LINK bdevperf 00:03:18.425 LINK fused_ordering 00:03:18.425 CXX test/cpp_headers/ioat_spec.o 00:03:18.425 LINK doorbell_aers 00:03:18.425 CC test/nvme/fdp/fdp.o 00:03:18.425 CXX test/cpp_headers/iscsi_spec.o 00:03:18.425 LINK nvme_compliance 00:03:18.425 CC test/nvme/cuse/cuse.o 00:03:18.425 CXX test/cpp_headers/json.o 00:03:18.683 CXX test/cpp_headers/jsonrpc.o 00:03:18.683 CXX test/cpp_headers/keyring.o 00:03:18.683 CXX test/cpp_headers/keyring_module.o 00:03:18.683 CC test/bdev/bdevio/bdevio.o 00:03:18.683 CXX test/cpp_headers/likely.o 00:03:18.683 CXX test/cpp_headers/log.o 00:03:18.683 CC examples/nvmf/nvmf/nvmf.o 00:03:18.683 CXX test/cpp_headers/lvol.o 00:03:18.683 CXX test/cpp_headers/md5.o 00:03:18.942 LINK fdp 00:03:18.942 CXX test/cpp_headers/memory.o 00:03:18.942 CXX test/cpp_headers/mmio.o 00:03:18.942 CXX 
test/cpp_headers/nbd.o 00:03:18.942 CXX test/cpp_headers/net.o 00:03:18.942 CXX test/cpp_headers/notify.o 00:03:18.942 CXX test/cpp_headers/nvme.o 00:03:18.942 CXX test/cpp_headers/nvme_intel.o 00:03:18.942 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.942 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:19.199 LINK nvmf 00:03:19.199 CXX test/cpp_headers/nvme_spec.o 00:03:19.199 CXX test/cpp_headers/nvme_zns.o 00:03:19.199 LINK bdevio 00:03:19.199 CXX test/cpp_headers/nvmf_cmd.o 00:03:19.199 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:19.199 CXX test/cpp_headers/nvmf.o 00:03:19.199 CXX test/cpp_headers/nvmf_spec.o 00:03:19.199 CXX test/cpp_headers/nvmf_transport.o 00:03:19.199 CXX test/cpp_headers/opal.o 00:03:19.199 CXX test/cpp_headers/opal_spec.o 00:03:19.457 CXX test/cpp_headers/pci_ids.o 00:03:19.457 CXX test/cpp_headers/pipe.o 00:03:19.457 CXX test/cpp_headers/queue.o 00:03:19.457 CXX test/cpp_headers/reduce.o 00:03:19.457 CXX test/cpp_headers/rpc.o 00:03:19.457 CXX test/cpp_headers/scheduler.o 00:03:19.457 CXX test/cpp_headers/scsi.o 00:03:19.457 CXX test/cpp_headers/scsi_spec.o 00:03:19.457 CXX test/cpp_headers/sock.o 00:03:19.457 CXX test/cpp_headers/stdinc.o 00:03:19.716 CXX test/cpp_headers/string.o 00:03:19.716 CXX test/cpp_headers/thread.o 00:03:19.716 CXX test/cpp_headers/trace.o 00:03:19.716 CXX test/cpp_headers/trace_parser.o 00:03:19.716 CXX test/cpp_headers/tree.o 00:03:19.716 CXX test/cpp_headers/ublk.o 00:03:19.716 CXX test/cpp_headers/util.o 00:03:19.716 CXX test/cpp_headers/uuid.o 00:03:19.716 CXX test/cpp_headers/version.o 00:03:19.716 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.716 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.716 CXX test/cpp_headers/vhost.o 00:03:19.716 CXX test/cpp_headers/vmd.o 00:03:19.973 CXX test/cpp_headers/xor.o 00:03:19.973 CXX test/cpp_headers/zipf.o 00:03:20.231 LINK cuse 00:03:24.419 LINK esnap 00:03:24.419 00:03:24.419 real 1m24.766s 00:03:24.419 user 7m25.304s 00:03:24.419 sys 1m42.125s 00:03:24.419 19:32:07 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:24.419 19:32:07 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.419 ************************************ 00:03:24.419 END TEST make 00:03:24.419 ************************************ 00:03:24.419 19:32:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.419 19:32:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.419 19:32:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.419 19:32:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.419 19:32:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.419 19:32:07 -- pm/common@44 -- $ pid=5471 00:03:24.419 19:32:07 -- pm/common@50 -- $ kill -TERM 5471 00:03:24.419 19:32:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.419 19:32:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.419 19:32:07 -- pm/common@44 -- $ pid=5472 00:03:24.419 19:32:07 -- pm/common@50 -- $ kill -TERM 5472 00:03:24.419 19:32:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:24.419 19:32:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:24.419 19:32:07 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:24.419 19:32:07 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:24.419 19:32:07 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:24.678 19:32:07 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:24.678 19:32:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:24.678 19:32:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:24.678 19:32:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:24.678 19:32:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.678 19:32:07 -- scripts/common.sh@336 -- # read -ra ver1 
00:03:24.678 19:32:07 -- scripts/common.sh@337 -- # IFS=.-: 00:03:24.678 19:32:07 -- scripts/common.sh@337 -- # read -ra ver2 00:03:24.678 19:32:07 -- scripts/common.sh@338 -- # local 'op=<' 00:03:24.678 19:32:07 -- scripts/common.sh@340 -- # ver1_l=2 00:03:24.678 19:32:07 -- scripts/common.sh@341 -- # ver2_l=1 00:03:24.678 19:32:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:24.678 19:32:07 -- scripts/common.sh@344 -- # case "$op" in 00:03:24.678 19:32:07 -- scripts/common.sh@345 -- # : 1 00:03:24.678 19:32:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:24.678 19:32:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:24.678 19:32:07 -- scripts/common.sh@365 -- # decimal 1 00:03:24.678 19:32:07 -- scripts/common.sh@353 -- # local d=1 00:03:24.678 19:32:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.678 19:32:07 -- scripts/common.sh@355 -- # echo 1 00:03:24.678 19:32:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:24.678 19:32:07 -- scripts/common.sh@366 -- # decimal 2 00:03:24.678 19:32:07 -- scripts/common.sh@353 -- # local d=2 00:03:24.678 19:32:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.678 19:32:07 -- scripts/common.sh@355 -- # echo 2 00:03:24.678 19:32:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:24.678 19:32:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:24.678 19:32:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:24.678 19:32:07 -- scripts/common.sh@368 -- # return 0 00:03:24.678 19:32:07 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.678 19:32:07 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:24.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.678 --rc genhtml_branch_coverage=1 00:03:24.678 --rc genhtml_function_coverage=1 00:03:24.678 --rc genhtml_legend=1 00:03:24.678 --rc geninfo_all_blocks=1 00:03:24.678 --rc 
geninfo_unexecuted_blocks=1 00:03:24.678 00:03:24.678 ' 00:03:24.678 19:32:07 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:24.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.678 --rc genhtml_branch_coverage=1 00:03:24.678 --rc genhtml_function_coverage=1 00:03:24.678 --rc genhtml_legend=1 00:03:24.678 --rc geninfo_all_blocks=1 00:03:24.678 --rc geninfo_unexecuted_blocks=1 00:03:24.678 00:03:24.678 ' 00:03:24.678 19:32:07 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:24.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.678 --rc genhtml_branch_coverage=1 00:03:24.678 --rc genhtml_function_coverage=1 00:03:24.678 --rc genhtml_legend=1 00:03:24.678 --rc geninfo_all_blocks=1 00:03:24.678 --rc geninfo_unexecuted_blocks=1 00:03:24.678 00:03:24.678 ' 00:03:24.678 19:32:07 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:24.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.678 --rc genhtml_branch_coverage=1 00:03:24.678 --rc genhtml_function_coverage=1 00:03:24.678 --rc genhtml_legend=1 00:03:24.678 --rc geninfo_all_blocks=1 00:03:24.678 --rc geninfo_unexecuted_blocks=1 00:03:24.678 00:03:24.678 ' 00:03:24.678 19:32:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:24.678 19:32:07 -- nvmf/common.sh@7 -- # uname -s 00:03:24.678 19:32:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:24.678 19:32:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:24.678 19:32:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:24.678 19:32:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:24.678 19:32:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:24.678 19:32:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:24.678 19:32:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:24.678 19:32:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:24.678 19:32:07 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:24.678 19:32:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:24.678 19:32:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1950b98c-7192-4b5a-a8dc-2e6969d48a59 00:03:24.678 19:32:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=1950b98c-7192-4b5a-a8dc-2e6969d48a59 00:03:24.678 19:32:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:24.678 19:32:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:24.678 19:32:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:24.678 19:32:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:24.678 19:32:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:24.678 19:32:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:24.678 19:32:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:24.678 19:32:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:24.678 19:32:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:24.678 19:32:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.678 19:32:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.678 19:32:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.678 19:32:07 -- paths/export.sh@5 -- # export PATH 00:03:24.678 19:32:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.678 19:32:07 -- nvmf/common.sh@51 -- # : 0 00:03:24.678 19:32:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:24.678 19:32:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:24.678 19:32:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:24.678 19:32:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:24.678 19:32:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:24.678 19:32:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:24.678 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:24.678 19:32:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:24.678 19:32:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:24.678 19:32:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:24.678 19:32:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:24.678 19:32:07 -- spdk/autotest.sh@32 -- # uname -s 00:03:24.678 19:32:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:24.678 19:32:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:24.678 19:32:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:24.678 19:32:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P 
%s %t' 00:03:24.678 19:32:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:24.678 19:32:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:24.678 19:32:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:24.678 19:32:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:24.678 19:32:07 -- spdk/autotest.sh@48 -- # udevadm_pid=56209 00:03:24.678 19:32:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:24.678 19:32:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:24.678 19:32:07 -- pm/common@17 -- # local monitor 00:03:24.678 19:32:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.678 19:32:07 -- pm/common@21 -- # date +%s 00:03:24.678 19:32:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.678 19:32:07 -- pm/common@25 -- # sleep 1 00:03:24.678 19:32:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734031927 00:03:24.678 19:32:07 -- pm/common@21 -- # date +%s 00:03:24.678 19:32:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734031927 00:03:24.678 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734031927_collect-cpu-load.pm.log 00:03:24.678 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734031927_collect-vmstat.pm.log 00:03:25.614 19:32:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.614 19:32:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:25.614 19:32:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:25.614 19:32:08 -- common/autotest_common.sh@10 -- # set +x 00:03:25.614 19:32:08 -- spdk/autotest.sh@59 -- # create_test_list 00:03:25.614 19:32:08 -- common/autotest_common.sh@752 
-- # xtrace_disable 00:03:25.614 19:32:08 -- common/autotest_common.sh@10 -- # set +x 00:03:25.873 19:32:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:25.873 19:32:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:25.873 19:32:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:25.873 19:32:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:25.873 19:32:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:25.873 19:32:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:25.873 19:32:08 -- common/autotest_common.sh@1457 -- # uname 00:03:25.873 19:32:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:25.873 19:32:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:25.873 19:32:08 -- common/autotest_common.sh@1477 -- # uname 00:03:25.873 19:32:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:25.873 19:32:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:25.873 19:32:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:25.873 lcov: LCOV version 1.15 00:03:25.873 19:32:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:40.766 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:40.766 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:58.858 19:32:38 -- spdk/autotest.sh@76 -- # 
timing_enter pre_cleanup 00:03:58.858 19:32:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.858 19:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:58.858 19:32:38 -- spdk/autotest.sh@78 -- # rm -f 00:03:58.858 19:32:38 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.858 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:58.858 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:58.858 19:32:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:58.858 19:32:39 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:58.858 19:32:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:58.858 19:32:39 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:58.858 19:32:39 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:58.858 19:32:39 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:58.858 19:32:39 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:58.859 19:32:39 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:58.859 19:32:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:58.859 19:32:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:58.859 19:32:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:58.859 19:32:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.859 19:32:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.859 19:32:39 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:58.859 19:32:39 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:58.859 19:32:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:58.859 19:32:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 
00:03:58.859 19:32:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:58.859 19:32:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:58.859 19:32:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.859 19:32:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:58.859 19:32:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:58.859 19:32:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:58.859 19:32:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:58.859 19:32:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.859 19:32:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:58.859 19:32:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:58.859 19:32:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:58.859 19:32:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:58.859 19:32:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.859 19:32:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:58.859 19:32:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.859 19:32:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.859 19:32:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:58.859 19:32:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:58.859 19:32:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:58.859 No valid GPT data, bailing 00:03:58.859 19:32:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.859 19:32:39 -- scripts/common.sh@394 -- # pt= 00:03:58.859 19:32:39 -- scripts/common.sh@395 -- # return 1 00:03:58.859 19:32:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:58.859 1+0 records in 00:03:58.859 1+0 
records out 00:03:58.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00636391 s, 165 MB/s 00:03:58.859 19:32:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.859 19:32:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.859 19:32:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:58.859 19:32:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:58.859 19:32:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:58.859 No valid GPT data, bailing 00:03:58.859 19:32:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:58.859 19:32:39 -- scripts/common.sh@394 -- # pt= 00:03:58.859 19:32:39 -- scripts/common.sh@395 -- # return 1 00:03:58.859 19:32:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:58.859 1+0 records in 00:03:58.859 1+0 records out 00:03:58.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642215 s, 163 MB/s 00:03:58.859 19:32:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.859 19:32:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.859 19:32:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:58.859 19:32:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:58.859 19:32:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:58.859 No valid GPT data, bailing 00:03:58.859 19:32:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:58.859 19:32:39 -- scripts/common.sh@394 -- # pt= 00:03:58.859 19:32:39 -- scripts/common.sh@395 -- # return 1 00:03:58.859 19:32:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:58.859 1+0 records in 00:03:58.859 1+0 records out 00:03:58.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00625497 s, 168 MB/s 00:03:58.859 19:32:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.859 19:32:39 -- spdk/autotest.sh@99 -- # [[ 
-z '' ]] 00:03:58.859 19:32:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:58.859 19:32:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:58.859 19:32:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:58.859 No valid GPT data, bailing 00:03:58.859 19:32:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:58.859 19:32:40 -- scripts/common.sh@394 -- # pt= 00:03:58.859 19:32:40 -- scripts/common.sh@395 -- # return 1 00:03:58.859 19:32:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:58.859 1+0 records in 00:03:58.859 1+0 records out 00:03:58.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00698128 s, 150 MB/s 00:03:58.859 19:32:40 -- spdk/autotest.sh@105 -- # sync 00:03:58.859 19:32:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:58.859 19:32:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:58.859 19:32:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:00.239 19:32:42 -- spdk/autotest.sh@111 -- # uname -s 00:04:00.239 19:32:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:00.239 19:32:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:00.239 19:32:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.808 Hugepages 00:04:00.808 node hugesize free / total 00:04:00.808 node0 1048576kB 0 / 0 00:04:00.808 node0 2048kB 0 / 0 00:04:00.808 00:04:00.808 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.067 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:01.067 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:01.326 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:01.326 19:32:43 -- spdk/autotest.sh@117 -- # uname 
-s 00:04:01.326 19:32:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:01.326 19:32:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:01.326 19:32:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.262 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.262 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.262 19:32:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:03.213 19:32:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:03.213 19:32:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:03.213 19:32:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:03.213 19:32:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:03.213 19:32:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.213 19:32:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.213 19:32:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.213 19:32:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.213 19:32:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:03.473 19:32:46 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:03.473 19:32:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.473 19:32:46 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.989 Waiting for block devices as requested 00:04:03.989 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.989 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.249 19:32:46 -- 
common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:04.249 19:32:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:04.249 19:32:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:04.249 19:32:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.249 19:32:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.249 19:32:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:04.249 19:32:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.249 19:32:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:04.249 19:32:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:04.249 19:32:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:04.249 19:32:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:04.249 19:32:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:04.249 19:32:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:04.249 19:32:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:04.249 19:32:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:04.249 19:32:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:04.249 19:32:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:04.249 19:32:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:04.249 19:32:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:04.249 19:32:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:04.249 19:32:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:04.249 19:32:46 -- common/autotest_common.sh@1543 -- # continue 00:04:04.249 19:32:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:04.249 19:32:46 -- 
common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:04.249 19:32:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.249 19:32:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:04.249 19:32:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.249 19:32:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:04.249 19:32:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.249 19:32:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:04.249 19:32:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:04.249 19:32:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:04.249 19:32:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:04.249 19:32:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:04.249 19:32:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:04.249 19:32:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:04.249 19:32:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:04.249 19:32:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:04.249 19:32:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:04.249 19:32:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:04.249 19:32:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:04.249 19:32:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:04.249 19:32:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:04.249 19:32:46 -- common/autotest_common.sh@1543 -- # continue 00:04:04.249 19:32:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:04.249 19:32:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.249 19:32:46 -- common/autotest_common.sh@10 -- 
# set +x 00:04:04.249 19:32:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:04.249 19:32:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.249 19:32:46 -- common/autotest_common.sh@10 -- # set +x 00:04:04.249 19:32:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.186 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.186 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.186 19:32:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:05.186 19:32:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.186 19:32:47 -- common/autotest_common.sh@10 -- # set +x 00:04:05.445 19:32:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:05.445 19:32:48 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:05.445 19:32:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.445 19:32:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:05.445 19:32:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:05.445 19:32:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:05.445 19:32:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:05.445 19:32:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:05.445 19:32:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:05.445 19:32:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:05.445 19:32:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.445 19:32:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.445 19:32:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:05.445 19:32:48 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:05.445 19:32:48 -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.445 19:32:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:05.445 19:32:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:05.445 19:32:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:05.445 19:32:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.445 19:32:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:05.445 19:32:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:05.445 19:32:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:05.445 19:32:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.445 19:32:48 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:05.445 19:32:48 -- common/autotest_common.sh@1572 -- # return 0 00:04:05.445 19:32:48 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:05.445 19:32:48 -- common/autotest_common.sh@1580 -- # return 0 00:04:05.445 19:32:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:05.445 19:32:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:05.445 19:32:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.445 19:32:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.445 19:32:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:05.445 19:32:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.445 19:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:05.445 19:32:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:05.445 19:32:48 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.445 19:32:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.445 19:32:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.445 19:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:05.445 ************************************ 
00:04:05.445 START TEST env 00:04:05.445 ************************************ 00:04:05.445 19:32:48 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.445 * Looking for test storage... 00:04:05.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.705 19:32:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.705 19:32:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.705 19:32:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.705 19:32:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.705 19:32:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.705 19:32:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.705 19:32:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.705 19:32:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.705 19:32:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.705 19:32:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.705 19:32:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.705 19:32:48 env -- scripts/common.sh@344 -- # case "$op" in 00:04:05.705 19:32:48 env -- scripts/common.sh@345 -- # : 1 00:04:05.705 19:32:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.705 19:32:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.705 19:32:48 env -- scripts/common.sh@365 -- # decimal 1 00:04:05.705 19:32:48 env -- scripts/common.sh@353 -- # local d=1 00:04:05.705 19:32:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.705 19:32:48 env -- scripts/common.sh@355 -- # echo 1 00:04:05.705 19:32:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.705 19:32:48 env -- scripts/common.sh@366 -- # decimal 2 00:04:05.705 19:32:48 env -- scripts/common.sh@353 -- # local d=2 00:04:05.705 19:32:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.705 19:32:48 env -- scripts/common.sh@355 -- # echo 2 00:04:05.705 19:32:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.705 19:32:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.705 19:32:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.705 19:32:48 env -- scripts/common.sh@368 -- # return 0 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.705 --rc genhtml_branch_coverage=1 00:04:05.705 --rc genhtml_function_coverage=1 00:04:05.705 --rc genhtml_legend=1 00:04:05.705 --rc geninfo_all_blocks=1 00:04:05.705 --rc geninfo_unexecuted_blocks=1 00:04:05.705 00:04:05.705 ' 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.705 --rc genhtml_branch_coverage=1 00:04:05.705 --rc genhtml_function_coverage=1 00:04:05.705 --rc genhtml_legend=1 00:04:05.705 --rc geninfo_all_blocks=1 00:04:05.705 --rc geninfo_unexecuted_blocks=1 00:04:05.705 00:04:05.705 ' 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:05.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:05.705 --rc genhtml_branch_coverage=1 00:04:05.705 --rc genhtml_function_coverage=1 00:04:05.705 --rc genhtml_legend=1 00:04:05.705 --rc geninfo_all_blocks=1 00:04:05.705 --rc geninfo_unexecuted_blocks=1 00:04:05.705 00:04:05.705 ' 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.705 --rc genhtml_branch_coverage=1 00:04:05.705 --rc genhtml_function_coverage=1 00:04:05.705 --rc genhtml_legend=1 00:04:05.705 --rc geninfo_all_blocks=1 00:04:05.705 --rc geninfo_unexecuted_blocks=1 00:04:05.705 00:04:05.705 ' 00:04:05.705 19:32:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.705 19:32:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.705 19:32:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.705 ************************************ 00:04:05.705 START TEST env_memory 00:04:05.705 ************************************ 00:04:05.705 19:32:48 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.705 00:04:05.705 00:04:05.705 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.705 http://cunit.sourceforge.net/ 00:04:05.705 00:04:05.705 00:04:05.705 Suite: memory 00:04:05.705 Test: alloc and free memory map ...[2024-12-12 19:32:48.479785] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:05.705 passed 00:04:05.705 Test: mem map translation ...[2024-12-12 19:32:48.528456] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:05.705 [2024-12-12 19:32:48.528557] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:05.705 [2024-12-12 19:32:48.528641] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:05.705 [2024-12-12 19:32:48.528673] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:05.964 passed 00:04:05.964 Test: mem map registration ...[2024-12-12 19:32:48.605807] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:05.965 [2024-12-12 19:32:48.605889] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:05.965 passed 00:04:05.965 Test: mem map adjacent registrations ...passed 00:04:05.965 00:04:05.965 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.965 suites 1 1 n/a 0 0 00:04:05.965 tests 4 4 4 0 0 00:04:05.965 asserts 152 152 152 0 n/a 00:04:05.965 00:04:05.965 Elapsed time = 0.271 seconds 00:04:05.965 00:04:05.965 real 0m0.325s 00:04:05.965 user 0m0.279s 00:04:05.965 sys 0m0.035s 00:04:05.965 19:32:48 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.965 19:32:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:05.965 ************************************ 00:04:05.965 END TEST env_memory 00:04:05.965 ************************************ 00:04:05.965 19:32:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:05.965 19:32:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.965 19:32:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.965 19:32:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.965 
************************************ 00:04:05.965 START TEST env_vtophys 00:04:05.965 ************************************ 00:04:05.965 19:32:48 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.224 EAL: lib.eal log level changed from notice to debug 00:04:06.224 EAL: Detected lcore 0 as core 0 on socket 0 00:04:06.224 EAL: Detected lcore 1 as core 0 on socket 0 00:04:06.224 EAL: Detected lcore 2 as core 0 on socket 0 00:04:06.224 EAL: Detected lcore 3 as core 0 on socket 0 00:04:06.224 EAL: Detected lcore 4 as core 0 on socket 0 00:04:06.224 EAL: Detected lcore 5 as core 0 on socket 0 00:04:06.224 EAL: Detected lcore 6 as core 0 on socket 0 00:04:06.224 EAL: Detected lcore 7 as core 0 on socket 0 00:04:06.224 EAL: Detected lcore 8 as core 0 on socket 0 00:04:06.224 EAL: Detected lcore 9 as core 0 on socket 0 00:04:06.224 EAL: Maximum logical cores by configuration: 128 00:04:06.224 EAL: Detected CPU lcores: 10 00:04:06.224 EAL: Detected NUMA nodes: 1 00:04:06.224 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:06.224 EAL: Detected shared linkage of DPDK 00:04:06.224 EAL: No shared files mode enabled, IPC will be disabled 00:04:06.224 EAL: Selected IOVA mode 'PA' 00:04:06.224 EAL: Probing VFIO support... 00:04:06.224 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.224 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:06.224 EAL: Ask a virtual area of 0x2e000 bytes 00:04:06.224 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:06.224 EAL: Setting up physically contiguous memory... 
00:04:06.224 EAL: Setting maximum number of open files to 524288 00:04:06.224 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:06.224 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:06.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.224 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:06.224 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.224 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:06.224 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:06.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.224 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:06.224 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.224 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:06.224 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:06.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.224 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:06.224 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.224 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:06.224 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:06.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.224 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:06.224 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.224 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:06.224 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:06.224 EAL: Hugepages will be freed exactly as allocated. 
00:04:06.224 EAL: No shared files mode enabled, IPC is disabled 00:04:06.224 EAL: No shared files mode enabled, IPC is disabled 00:04:06.224 EAL: TSC frequency is ~2290000 KHz 00:04:06.224 EAL: Main lcore 0 is ready (tid=7fcd5bc0da40;cpuset=[0]) 00:04:06.224 EAL: Trying to obtain current memory policy. 00:04:06.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.224 EAL: Restoring previous memory policy: 0 00:04:06.224 EAL: request: mp_malloc_sync 00:04:06.224 EAL: No shared files mode enabled, IPC is disabled 00:04:06.224 EAL: Heap on socket 0 was expanded by 2MB 00:04:06.225 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.225 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:06.225 EAL: Mem event callback 'spdk:(nil)' registered 00:04:06.225 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:06.225 00:04:06.225 00:04:06.225 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.225 http://cunit.sourceforge.net/ 00:04:06.225 00:04:06.225 00:04:06.225 Suite: components_suite 00:04:06.792 Test: vtophys_malloc_test ...passed 00:04:06.792 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:06.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.792 EAL: Restoring previous memory policy: 4 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was expanded by 4MB 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was shrunk by 4MB 00:04:06.792 EAL: Trying to obtain current memory policy. 
00:04:06.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.792 EAL: Restoring previous memory policy: 4 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was expanded by 6MB 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was shrunk by 6MB 00:04:06.792 EAL: Trying to obtain current memory policy. 00:04:06.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.792 EAL: Restoring previous memory policy: 4 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was expanded by 10MB 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was shrunk by 10MB 00:04:06.792 EAL: Trying to obtain current memory policy. 00:04:06.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.792 EAL: Restoring previous memory policy: 4 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was expanded by 18MB 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was shrunk by 18MB 00:04:06.792 EAL: Trying to obtain current memory policy. 
00:04:06.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.792 EAL: Restoring previous memory policy: 4 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was expanded by 34MB 00:04:06.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.792 EAL: request: mp_malloc_sync 00:04:06.792 EAL: No shared files mode enabled, IPC is disabled 00:04:06.792 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.050 EAL: Trying to obtain current memory policy. 00:04:07.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.050 EAL: Restoring previous memory policy: 4 00:04:07.050 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.050 EAL: request: mp_malloc_sync 00:04:07.050 EAL: No shared files mode enabled, IPC is disabled 00:04:07.050 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.050 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.050 EAL: request: mp_malloc_sync 00:04:07.050 EAL: No shared files mode enabled, IPC is disabled 00:04:07.050 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.309 EAL: Trying to obtain current memory policy. 00:04:07.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.309 EAL: Restoring previous memory policy: 4 00:04:07.309 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.309 EAL: request: mp_malloc_sync 00:04:07.309 EAL: No shared files mode enabled, IPC is disabled 00:04:07.309 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.568 EAL: request: mp_malloc_sync 00:04:07.568 EAL: No shared files mode enabled, IPC is disabled 00:04:07.568 EAL: Heap on socket 0 was shrunk by 130MB 00:04:07.827 EAL: Trying to obtain current memory policy. 
00:04:07.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.827 EAL: Restoring previous memory policy: 4 00:04:07.827 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.827 EAL: request: mp_malloc_sync 00:04:07.828 EAL: No shared files mode enabled, IPC is disabled 00:04:07.828 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.395 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.395 EAL: request: mp_malloc_sync 00:04:08.395 EAL: No shared files mode enabled, IPC is disabled 00:04:08.395 EAL: Heap on socket 0 was shrunk by 258MB 00:04:08.653 EAL: Trying to obtain current memory policy. 00:04:08.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.910 EAL: Restoring previous memory policy: 4 00:04:08.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.910 EAL: request: mp_malloc_sync 00:04:08.910 EAL: No shared files mode enabled, IPC is disabled 00:04:08.910 EAL: Heap on socket 0 was expanded by 514MB 00:04:09.847 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.847 EAL: request: mp_malloc_sync 00:04:09.847 EAL: No shared files mode enabled, IPC is disabled 00:04:09.847 EAL: Heap on socket 0 was shrunk by 514MB 00:04:10.785 EAL: Trying to obtain current memory policy. 
00:04:10.785 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.785 EAL: Restoring previous memory policy: 4 00:04:10.785 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.785 EAL: request: mp_malloc_sync 00:04:10.785 EAL: No shared files mode enabled, IPC is disabled 00:04:10.785 EAL: Heap on socket 0 was expanded by 1026MB 00:04:12.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.949 EAL: request: mp_malloc_sync 00:04:12.949 EAL: No shared files mode enabled, IPC is disabled 00:04:12.949 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:14.856 passed 00:04:14.856 00:04:14.856 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.856 suites 1 1 n/a 0 0 00:04:14.856 tests 2 2 2 0 0 00:04:14.856 asserts 5817 5817 5817 0 n/a 00:04:14.856 00:04:14.856 Elapsed time = 8.247 seconds 00:04:14.856 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.856 EAL: request: mp_malloc_sync 00:04:14.856 EAL: No shared files mode enabled, IPC is disabled 00:04:14.856 EAL: Heap on socket 0 was shrunk by 2MB 00:04:14.856 EAL: No shared files mode enabled, IPC is disabled 00:04:14.856 EAL: No shared files mode enabled, IPC is disabled 00:04:14.856 EAL: No shared files mode enabled, IPC is disabled 00:04:14.856 00:04:14.856 real 0m8.571s 00:04:14.856 user 0m7.586s 00:04:14.856 sys 0m0.824s 00:04:14.856 19:32:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.856 19:32:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:14.856 ************************************ 00:04:14.856 END TEST env_vtophys 00:04:14.856 ************************************ 00:04:14.856 19:32:57 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.856 19:32:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.856 19:32:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.856 19:32:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.856 
************************************ 00:04:14.856 START TEST env_pci 00:04:14.856 ************************************ 00:04:14.856 19:32:57 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.856 00:04:14.856 00:04:14.856 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.856 http://cunit.sourceforge.net/ 00:04:14.856 00:04:14.856 00:04:14.856 Suite: pci 00:04:14.856 Test: pci_hook ...[2024-12-12 19:32:57.464442] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58537 has claimed it 00:04:14.856 passed 00:04:14.856 00:04:14.856 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.856 suites 1 1 n/a 0 0 00:04:14.856 tests 1 1 1 0 0 00:04:14.856 asserts 25 25 25 0 n/a 00:04:14.856 00:04:14.856 Elapsed time = 0.007 seconds 00:04:14.856 EAL: Cannot find device (10000:00:01.0) 00:04:14.856 EAL: Failed to attach device on primary process 00:04:14.856 00:04:14.856 real 0m0.108s 00:04:14.856 user 0m0.055s 00:04:14.856 sys 0m0.052s 00:04:14.856 19:32:57 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.856 19:32:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:14.856 ************************************ 00:04:14.856 END TEST env_pci 00:04:14.857 ************************************ 00:04:14.857 19:32:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:14.857 19:32:57 env -- env/env.sh@15 -- # uname 00:04:14.857 19:32:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:14.857 19:32:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:14.857 19:32:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.857 19:32:57 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:14.857 19:32:57 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.857 19:32:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.857 ************************************ 00:04:14.857 START TEST env_dpdk_post_init 00:04:14.857 ************************************ 00:04:14.857 19:32:57 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.857 EAL: Detected CPU lcores: 10 00:04:14.857 EAL: Detected NUMA nodes: 1 00:04:14.857 EAL: Detected shared linkage of DPDK 00:04:14.857 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.857 EAL: Selected IOVA mode 'PA' 00:04:15.116 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.116 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:15.116 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:15.116 Starting DPDK initialization... 00:04:15.116 Starting SPDK post initialization... 00:04:15.116 SPDK NVMe probe 00:04:15.116 Attaching to 0000:00:10.0 00:04:15.116 Attaching to 0000:00:11.0 00:04:15.116 Attached to 0000:00:10.0 00:04:15.116 Attached to 0000:00:11.0 00:04:15.116 Cleaning up... 
00:04:15.116 00:04:15.116 real 0m0.280s 00:04:15.116 user 0m0.086s 00:04:15.116 sys 0m0.094s 00:04:15.116 19:32:57 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.116 19:32:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.116 ************************************ 00:04:15.116 END TEST env_dpdk_post_init 00:04:15.116 ************************************ 00:04:15.116 19:32:57 env -- env/env.sh@26 -- # uname 00:04:15.116 19:32:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:15.117 19:32:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.117 19:32:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.117 19:32:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.117 19:32:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.117 ************************************ 00:04:15.117 START TEST env_mem_callbacks 00:04:15.117 ************************************ 00:04:15.117 19:32:57 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.376 EAL: Detected CPU lcores: 10 00:04:15.376 EAL: Detected NUMA nodes: 1 00:04:15.376 EAL: Detected shared linkage of DPDK 00:04:15.376 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.376 EAL: Selected IOVA mode 'PA' 00:04:15.376 00:04:15.376 00:04:15.376 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.376 http://cunit.sourceforge.net/ 00:04:15.376 00:04:15.376 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.376 00:04:15.376 Suite: memory 00:04:15.376 Test: test ... 
00:04:15.376 register 0x200000200000 2097152 00:04:15.376 malloc 3145728 00:04:15.376 register 0x200000400000 4194304 00:04:15.376 buf 0x2000004fffc0 len 3145728 PASSED 00:04:15.376 malloc 64 00:04:15.376 buf 0x2000004ffec0 len 64 PASSED 00:04:15.376 malloc 4194304 00:04:15.376 register 0x200000800000 6291456 00:04:15.376 buf 0x2000009fffc0 len 4194304 PASSED 00:04:15.376 free 0x2000004fffc0 3145728 00:04:15.376 free 0x2000004ffec0 64 00:04:15.376 unregister 0x200000400000 4194304 PASSED 00:04:15.376 free 0x2000009fffc0 4194304 00:04:15.376 unregister 0x200000800000 6291456 PASSED 00:04:15.376 malloc 8388608 00:04:15.376 register 0x200000400000 10485760 00:04:15.376 buf 0x2000005fffc0 len 8388608 PASSED 00:04:15.376 free 0x2000005fffc0 8388608 00:04:15.376 unregister 0x200000400000 10485760 PASSED 00:04:15.376 passed 00:04:15.376 00:04:15.376 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.376 suites 1 1 n/a 0 0 00:04:15.376 tests 1 1 1 0 0 00:04:15.376 asserts 15 15 15 0 n/a 00:04:15.376 00:04:15.376 Elapsed time = 0.089 seconds 00:04:15.636 00:04:15.636 real 0m0.294s 00:04:15.636 user 0m0.121s 00:04:15.636 sys 0m0.070s 00:04:15.636 19:32:58 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.636 19:32:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:15.636 ************************************ 00:04:15.636 END TEST env_mem_callbacks 00:04:15.636 ************************************ 00:04:15.636 00:04:15.636 real 0m10.113s 00:04:15.636 user 0m8.362s 00:04:15.636 sys 0m1.383s 00:04:15.636 19:32:58 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.636 19:32:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.636 ************************************ 00:04:15.636 END TEST env 00:04:15.636 ************************************ 00:04:15.636 19:32:58 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.636 19:32:58 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.637 19:32:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.637 19:32:58 -- common/autotest_common.sh@10 -- # set +x 00:04:15.637 ************************************ 00:04:15.637 START TEST rpc 00:04:15.637 ************************************ 00:04:15.637 19:32:58 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.637 * Looking for test storage... 00:04:15.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.637 19:32:58 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.637 19:32:58 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.637 19:32:58 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.897 19:32:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.897 19:32:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.897 19:32:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.897 19:32:58 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.897 19:32:58 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.897 19:32:58 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.897 19:32:58 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.897 19:32:58 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.897 19:32:58 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.897 19:32:58 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.897 19:32:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.897 19:32:58 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.897 19:32:58 rpc -- scripts/common.sh@345 -- # : 1 00:04:15.897 19:32:58 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.897 19:32:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.897 19:32:58 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.897 19:32:58 rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.897 19:32:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.897 19:32:58 rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.897 19:32:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.897 19:32:58 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.897 19:32:58 rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.897 19:32:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.897 19:32:58 rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.897 19:32:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.897 19:32:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.897 19:32:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.897 19:32:58 rpc -- scripts/common.sh@368 -- # return 0 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.897 --rc genhtml_branch_coverage=1 00:04:15.897 --rc genhtml_function_coverage=1 00:04:15.897 --rc genhtml_legend=1 00:04:15.897 --rc geninfo_all_blocks=1 00:04:15.897 --rc geninfo_unexecuted_blocks=1 00:04:15.897 00:04:15.897 ' 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.897 --rc genhtml_branch_coverage=1 00:04:15.897 --rc genhtml_function_coverage=1 00:04:15.897 --rc genhtml_legend=1 00:04:15.897 --rc geninfo_all_blocks=1 00:04:15.897 --rc geninfo_unexecuted_blocks=1 00:04:15.897 00:04:15.897 ' 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:15.897 --rc genhtml_branch_coverage=1 00:04:15.897 --rc genhtml_function_coverage=1 00:04:15.897 --rc genhtml_legend=1 00:04:15.897 --rc geninfo_all_blocks=1 00:04:15.897 --rc geninfo_unexecuted_blocks=1 00:04:15.897 00:04:15.897 ' 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.897 --rc genhtml_branch_coverage=1 00:04:15.897 --rc genhtml_function_coverage=1 00:04:15.897 --rc genhtml_legend=1 00:04:15.897 --rc geninfo_all_blocks=1 00:04:15.897 --rc geninfo_unexecuted_blocks=1 00:04:15.897 00:04:15.897 ' 00:04:15.897 19:32:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58669 00:04:15.897 19:32:58 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:15.897 19:32:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.897 19:32:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58669 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@835 -- # '[' -z 58669 ']' 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.897 19:32:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.897 [2024-12-12 19:32:58.662607] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:15.897 [2024-12-12 19:32:58.662739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58669 ] 00:04:16.157 [2024-12-12 19:32:58.832333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.157 [2024-12-12 19:32:58.957518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:16.157 [2024-12-12 19:32:58.957594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58669' to capture a snapshot of events at runtime. 00:04:16.157 [2024-12-12 19:32:58.957606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:16.157 [2024-12-12 19:32:58.957617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:16.157 [2024-12-12 19:32:58.957625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58669 for offline analysis/debug. 
00:04:16.157 [2024-12-12 19:32:58.959048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.096 19:32:59 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.096 19:32:59 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:17.096 19:32:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.096 19:32:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.096 19:32:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:17.096 19:32:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:17.096 19:32:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.096 19:32:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.096 19:32:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.096 ************************************ 00:04:17.096 START TEST rpc_integrity 00:04:17.096 ************************************ 00:04:17.096 19:32:59 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:17.096 19:32:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.096 19:32:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.096 19:32:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.096 19:32:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.096 19:32:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.096 19:32:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.355 19:32:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.355 19:32:59 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.355 19:32:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.355 19:32:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.355 19:32:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.355 19:32:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:17.355 19:32:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.355 19:32:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.355 19:32:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.355 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.355 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.355 { 00:04:17.355 "name": "Malloc0", 00:04:17.355 "aliases": [ 00:04:17.355 "4c25fda6-3ef1-497f-a790-e84d480ffded" 00:04:17.355 ], 00:04:17.355 "product_name": "Malloc disk", 00:04:17.355 "block_size": 512, 00:04:17.355 "num_blocks": 16384, 00:04:17.355 "uuid": "4c25fda6-3ef1-497f-a790-e84d480ffded", 00:04:17.355 "assigned_rate_limits": { 00:04:17.355 "rw_ios_per_sec": 0, 00:04:17.355 "rw_mbytes_per_sec": 0, 00:04:17.355 "r_mbytes_per_sec": 0, 00:04:17.355 "w_mbytes_per_sec": 0 00:04:17.355 }, 00:04:17.355 "claimed": false, 00:04:17.355 "zoned": false, 00:04:17.355 "supported_io_types": { 00:04:17.355 "read": true, 00:04:17.355 "write": true, 00:04:17.355 "unmap": true, 00:04:17.355 "flush": true, 00:04:17.355 "reset": true, 00:04:17.355 "nvme_admin": false, 00:04:17.355 "nvme_io": false, 00:04:17.355 "nvme_io_md": false, 00:04:17.355 "write_zeroes": true, 00:04:17.355 "zcopy": true, 00:04:17.355 "get_zone_info": false, 00:04:17.355 "zone_management": false, 00:04:17.355 "zone_append": false, 00:04:17.355 "compare": false, 00:04:17.355 "compare_and_write": false, 00:04:17.355 "abort": true, 00:04:17.355 "seek_hole": false, 
00:04:17.355 "seek_data": false, 00:04:17.355 "copy": true, 00:04:17.355 "nvme_iov_md": false 00:04:17.355 }, 00:04:17.355 "memory_domains": [ 00:04:17.355 { 00:04:17.355 "dma_device_id": "system", 00:04:17.355 "dma_device_type": 1 00:04:17.355 }, 00:04:17.356 { 00:04:17.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.356 "dma_device_type": 2 00:04:17.356 } 00:04:17.356 ], 00:04:17.356 "driver_specific": {} 00:04:17.356 } 00:04:17.356 ]' 00:04:17.356 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.356 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.356 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.356 [2024-12-12 19:33:00.073312] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:17.356 [2024-12-12 19:33:00.073397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.356 [2024-12-12 19:33:00.073428] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:17.356 [2024-12-12 19:33:00.073477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.356 [2024-12-12 19:33:00.076011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.356 [2024-12-12 19:33:00.076066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.356 Passthru0 00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.356 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.356 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.356 { 00:04:17.356 "name": "Malloc0", 00:04:17.356 "aliases": [ 00:04:17.356 "4c25fda6-3ef1-497f-a790-e84d480ffded" 00:04:17.356 ], 00:04:17.356 "product_name": "Malloc disk", 00:04:17.356 "block_size": 512, 00:04:17.356 "num_blocks": 16384, 00:04:17.356 "uuid": "4c25fda6-3ef1-497f-a790-e84d480ffded", 00:04:17.356 "assigned_rate_limits": { 00:04:17.356 "rw_ios_per_sec": 0, 00:04:17.356 "rw_mbytes_per_sec": 0, 00:04:17.356 "r_mbytes_per_sec": 0, 00:04:17.356 "w_mbytes_per_sec": 0 00:04:17.356 }, 00:04:17.356 "claimed": true, 00:04:17.356 "claim_type": "exclusive_write", 00:04:17.356 "zoned": false, 00:04:17.356 "supported_io_types": { 00:04:17.356 "read": true, 00:04:17.356 "write": true, 00:04:17.356 "unmap": true, 00:04:17.356 "flush": true, 00:04:17.356 "reset": true, 00:04:17.356 "nvme_admin": false, 00:04:17.356 "nvme_io": false, 00:04:17.356 "nvme_io_md": false, 00:04:17.356 "write_zeroes": true, 00:04:17.356 "zcopy": true, 00:04:17.356 "get_zone_info": false, 00:04:17.356 "zone_management": false, 00:04:17.356 "zone_append": false, 00:04:17.356 "compare": false, 00:04:17.356 "compare_and_write": false, 00:04:17.356 "abort": true, 00:04:17.356 "seek_hole": false, 00:04:17.356 "seek_data": false, 00:04:17.356 "copy": true, 00:04:17.356 "nvme_iov_md": false 00:04:17.356 }, 00:04:17.356 "memory_domains": [ 00:04:17.356 { 00:04:17.356 "dma_device_id": "system", 00:04:17.356 "dma_device_type": 1 00:04:17.356 }, 00:04:17.356 { 00:04:17.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.356 "dma_device_type": 2 00:04:17.356 } 00:04:17.356 ], 00:04:17.356 "driver_specific": {} 00:04:17.356 }, 00:04:17.356 { 00:04:17.356 "name": "Passthru0", 00:04:17.356 "aliases": [ 00:04:17.356 "21c7eb50-4cb2-5e5f-8e83-9568c2d371dc" 00:04:17.356 ], 00:04:17.356 "product_name": "passthru", 00:04:17.356 
"block_size": 512, 00:04:17.356 "num_blocks": 16384, 00:04:17.356 "uuid": "21c7eb50-4cb2-5e5f-8e83-9568c2d371dc", 00:04:17.356 "assigned_rate_limits": { 00:04:17.356 "rw_ios_per_sec": 0, 00:04:17.356 "rw_mbytes_per_sec": 0, 00:04:17.356 "r_mbytes_per_sec": 0, 00:04:17.356 "w_mbytes_per_sec": 0 00:04:17.356 }, 00:04:17.356 "claimed": false, 00:04:17.356 "zoned": false, 00:04:17.356 "supported_io_types": { 00:04:17.356 "read": true, 00:04:17.356 "write": true, 00:04:17.356 "unmap": true, 00:04:17.356 "flush": true, 00:04:17.356 "reset": true, 00:04:17.356 "nvme_admin": false, 00:04:17.356 "nvme_io": false, 00:04:17.356 "nvme_io_md": false, 00:04:17.356 "write_zeroes": true, 00:04:17.356 "zcopy": true, 00:04:17.356 "get_zone_info": false, 00:04:17.356 "zone_management": false, 00:04:17.356 "zone_append": false, 00:04:17.356 "compare": false, 00:04:17.356 "compare_and_write": false, 00:04:17.356 "abort": true, 00:04:17.356 "seek_hole": false, 00:04:17.356 "seek_data": false, 00:04:17.356 "copy": true, 00:04:17.356 "nvme_iov_md": false 00:04:17.356 }, 00:04:17.356 "memory_domains": [ 00:04:17.356 { 00:04:17.356 "dma_device_id": "system", 00:04:17.356 "dma_device_type": 1 00:04:17.356 }, 00:04:17.356 { 00:04:17.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.356 "dma_device_type": 2 00:04:17.356 } 00:04:17.356 ], 00:04:17.356 "driver_specific": { 00:04:17.356 "passthru": { 00:04:17.356 "name": "Passthru0", 00:04:17.356 "base_bdev_name": "Malloc0" 00:04:17.356 } 00:04:17.356 } 00:04:17.356 } 00:04:17.356 ]' 00:04:17.356 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.356 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.356 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.356 19:33:00 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.356 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.356 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.616 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.616 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.616 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.616 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.616 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.616 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.616 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.616 19:33:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.616 00:04:17.616 real 0m0.352s 00:04:17.616 user 0m0.189s 00:04:17.616 sys 0m0.055s 00:04:17.616 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.616 19:33:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.616 ************************************ 00:04:17.616 END TEST rpc_integrity 00:04:17.616 ************************************ 00:04:17.616 19:33:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:17.616 19:33:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.616 19:33:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.616 19:33:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.616 ************************************ 00:04:17.616 START TEST rpc_plugins 00:04:17.616 ************************************ 00:04:17.616 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:17.616 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:17.616 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.616 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.616 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.616 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:17.616 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:17.616 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.616 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.616 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.616 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:17.616 { 00:04:17.616 "name": "Malloc1", 00:04:17.616 "aliases": [ 00:04:17.616 "7bac8829-ad33-46e7-bd50-28fbff5b0992" 00:04:17.616 ], 00:04:17.616 "product_name": "Malloc disk", 00:04:17.616 "block_size": 4096, 00:04:17.616 "num_blocks": 256, 00:04:17.616 "uuid": "7bac8829-ad33-46e7-bd50-28fbff5b0992", 00:04:17.616 "assigned_rate_limits": { 00:04:17.616 "rw_ios_per_sec": 0, 00:04:17.616 "rw_mbytes_per_sec": 0, 00:04:17.616 "r_mbytes_per_sec": 0, 00:04:17.616 "w_mbytes_per_sec": 0 00:04:17.616 }, 00:04:17.616 "claimed": false, 00:04:17.616 "zoned": false, 00:04:17.616 "supported_io_types": { 00:04:17.616 "read": true, 00:04:17.616 "write": true, 00:04:17.616 "unmap": true, 00:04:17.616 "flush": true, 00:04:17.616 "reset": true, 00:04:17.616 "nvme_admin": false, 00:04:17.616 "nvme_io": false, 00:04:17.616 "nvme_io_md": false, 00:04:17.616 "write_zeroes": true, 00:04:17.616 "zcopy": true, 00:04:17.616 "get_zone_info": false, 00:04:17.616 "zone_management": false, 00:04:17.616 "zone_append": false, 00:04:17.616 "compare": false, 00:04:17.616 "compare_and_write": false, 00:04:17.616 "abort": true, 00:04:17.616 "seek_hole": false, 00:04:17.616 "seek_data": false, 00:04:17.616 "copy": 
true, 00:04:17.616 "nvme_iov_md": false 00:04:17.616 }, 00:04:17.616 "memory_domains": [ 00:04:17.616 { 00:04:17.616 "dma_device_id": "system", 00:04:17.616 "dma_device_type": 1 00:04:17.616 }, 00:04:17.616 { 00:04:17.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.617 "dma_device_type": 2 00:04:17.617 } 00:04:17.617 ], 00:04:17.617 "driver_specific": {} 00:04:17.617 } 00:04:17.617 ]' 00:04:17.617 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:17.617 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:17.617 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:17.617 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.617 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.617 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.617 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:17.617 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.617 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.617 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.617 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:17.617 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:17.877 19:33:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:17.877 00:04:17.877 real 0m0.158s 00:04:17.877 user 0m0.088s 00:04:17.877 sys 0m0.025s 00:04:17.877 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.877 19:33:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.877 ************************************ 00:04:17.877 END TEST rpc_plugins 00:04:17.877 ************************************ 00:04:17.877 19:33:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:17.877 19:33:00 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.877 19:33:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.877 19:33:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.877 ************************************ 00:04:17.877 START TEST rpc_trace_cmd_test 00:04:17.877 ************************************ 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:17.877 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58669", 00:04:17.877 "tpoint_group_mask": "0x8", 00:04:17.877 "iscsi_conn": { 00:04:17.877 "mask": "0x2", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "scsi": { 00:04:17.877 "mask": "0x4", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "bdev": { 00:04:17.877 "mask": "0x8", 00:04:17.877 "tpoint_mask": "0xffffffffffffffff" 00:04:17.877 }, 00:04:17.877 "nvmf_rdma": { 00:04:17.877 "mask": "0x10", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "nvmf_tcp": { 00:04:17.877 "mask": "0x20", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "ftl": { 00:04:17.877 "mask": "0x40", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "blobfs": { 00:04:17.877 "mask": "0x80", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "dsa": { 00:04:17.877 "mask": "0x200", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "thread": { 00:04:17.877 "mask": "0x400", 00:04:17.877 
"tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "nvme_pcie": { 00:04:17.877 "mask": "0x800", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "iaa": { 00:04:17.877 "mask": "0x1000", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "nvme_tcp": { 00:04:17.877 "mask": "0x2000", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "bdev_nvme": { 00:04:17.877 "mask": "0x4000", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "sock": { 00:04:17.877 "mask": "0x8000", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "blob": { 00:04:17.877 "mask": "0x10000", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "bdev_raid": { 00:04:17.877 "mask": "0x20000", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 }, 00:04:17.877 "scheduler": { 00:04:17.877 "mask": "0x40000", 00:04:17.877 "tpoint_mask": "0x0" 00:04:17.877 } 00:04:17.877 }' 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:17.877 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:18.137 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:18.137 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:18.137 19:33:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:18.137 00:04:18.137 real 0m0.271s 00:04:18.137 user 0m0.212s 00:04:18.137 sys 0m0.044s 00:04:18.137 19:33:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:18.137 19:33:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:18.137 ************************************ 00:04:18.137 END TEST rpc_trace_cmd_test 00:04:18.137 ************************************ 00:04:18.137 19:33:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:18.137 19:33:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:18.137 19:33:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:18.137 19:33:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.137 19:33:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.137 19:33:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.137 ************************************ 00:04:18.137 START TEST rpc_daemon_integrity 00:04:18.138 ************************************ 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.138 { 00:04:18.138 "name": "Malloc2", 00:04:18.138 "aliases": [ 00:04:18.138 "8f232cf7-ba3a-4b21-b06f-aacb76dd9dcb" 00:04:18.138 ], 00:04:18.138 "product_name": "Malloc disk", 00:04:18.138 "block_size": 512, 00:04:18.138 "num_blocks": 16384, 00:04:18.138 "uuid": "8f232cf7-ba3a-4b21-b06f-aacb76dd9dcb", 00:04:18.138 "assigned_rate_limits": { 00:04:18.138 "rw_ios_per_sec": 0, 00:04:18.138 "rw_mbytes_per_sec": 0, 00:04:18.138 "r_mbytes_per_sec": 0, 00:04:18.138 "w_mbytes_per_sec": 0 00:04:18.138 }, 00:04:18.138 "claimed": false, 00:04:18.138 "zoned": false, 00:04:18.138 "supported_io_types": { 00:04:18.138 "read": true, 00:04:18.138 "write": true, 00:04:18.138 "unmap": true, 00:04:18.138 "flush": true, 00:04:18.138 "reset": true, 00:04:18.138 "nvme_admin": false, 00:04:18.138 "nvme_io": false, 00:04:18.138 "nvme_io_md": false, 00:04:18.138 "write_zeroes": true, 00:04:18.138 "zcopy": true, 00:04:18.138 "get_zone_info": false, 00:04:18.138 "zone_management": false, 00:04:18.138 "zone_append": false, 00:04:18.138 "compare": false, 00:04:18.138 "compare_and_write": false, 00:04:18.138 "abort": true, 00:04:18.138 "seek_hole": false, 00:04:18.138 "seek_data": false, 00:04:18.138 "copy": true, 00:04:18.138 "nvme_iov_md": false 00:04:18.138 }, 00:04:18.138 "memory_domains": [ 00:04:18.138 { 00:04:18.138 "dma_device_id": "system", 00:04:18.138 "dma_device_type": 1 00:04:18.138 }, 00:04:18.138 { 00:04:18.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.138 "dma_device_type": 2 00:04:18.138 } 
00:04:18.138 ], 00:04:18.138 "driver_specific": {} 00:04:18.138 } 00:04:18.138 ]' 00:04:18.138 19:33:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.398 [2024-12-12 19:33:01.027882] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:18.398 [2024-12-12 19:33:01.027962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.398 [2024-12-12 19:33:01.027989] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:18.398 [2024-12-12 19:33:01.028002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.398 [2024-12-12 19:33:01.030535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.398 [2024-12-12 19:33:01.030600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.398 Passthru0 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.398 { 00:04:18.398 "name": "Malloc2", 00:04:18.398 "aliases": [ 00:04:18.398 "8f232cf7-ba3a-4b21-b06f-aacb76dd9dcb" 
00:04:18.398 ], 00:04:18.398 "product_name": "Malloc disk", 00:04:18.398 "block_size": 512, 00:04:18.398 "num_blocks": 16384, 00:04:18.398 "uuid": "8f232cf7-ba3a-4b21-b06f-aacb76dd9dcb", 00:04:18.398 "assigned_rate_limits": { 00:04:18.398 "rw_ios_per_sec": 0, 00:04:18.398 "rw_mbytes_per_sec": 0, 00:04:18.398 "r_mbytes_per_sec": 0, 00:04:18.398 "w_mbytes_per_sec": 0 00:04:18.398 }, 00:04:18.398 "claimed": true, 00:04:18.398 "claim_type": "exclusive_write", 00:04:18.398 "zoned": false, 00:04:18.398 "supported_io_types": { 00:04:18.398 "read": true, 00:04:18.398 "write": true, 00:04:18.398 "unmap": true, 00:04:18.398 "flush": true, 00:04:18.398 "reset": true, 00:04:18.398 "nvme_admin": false, 00:04:18.398 "nvme_io": false, 00:04:18.398 "nvme_io_md": false, 00:04:18.398 "write_zeroes": true, 00:04:18.398 "zcopy": true, 00:04:18.398 "get_zone_info": false, 00:04:18.398 "zone_management": false, 00:04:18.398 "zone_append": false, 00:04:18.398 "compare": false, 00:04:18.398 "compare_and_write": false, 00:04:18.398 "abort": true, 00:04:18.398 "seek_hole": false, 00:04:18.398 "seek_data": false, 00:04:18.398 "copy": true, 00:04:18.398 "nvme_iov_md": false 00:04:18.398 }, 00:04:18.398 "memory_domains": [ 00:04:18.398 { 00:04:18.398 "dma_device_id": "system", 00:04:18.398 "dma_device_type": 1 00:04:18.398 }, 00:04:18.398 { 00:04:18.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.398 "dma_device_type": 2 00:04:18.398 } 00:04:18.398 ], 00:04:18.398 "driver_specific": {} 00:04:18.398 }, 00:04:18.398 { 00:04:18.398 "name": "Passthru0", 00:04:18.398 "aliases": [ 00:04:18.398 "8807c2fd-714a-54f2-b41b-8f368fe2f89b" 00:04:18.398 ], 00:04:18.398 "product_name": "passthru", 00:04:18.398 "block_size": 512, 00:04:18.398 "num_blocks": 16384, 00:04:18.398 "uuid": "8807c2fd-714a-54f2-b41b-8f368fe2f89b", 00:04:18.398 "assigned_rate_limits": { 00:04:18.398 "rw_ios_per_sec": 0, 00:04:18.398 "rw_mbytes_per_sec": 0, 00:04:18.398 "r_mbytes_per_sec": 0, 00:04:18.398 "w_mbytes_per_sec": 0 
00:04:18.398 }, 00:04:18.398 "claimed": false, 00:04:18.398 "zoned": false, 00:04:18.398 "supported_io_types": { 00:04:18.398 "read": true, 00:04:18.398 "write": true, 00:04:18.398 "unmap": true, 00:04:18.398 "flush": true, 00:04:18.398 "reset": true, 00:04:18.398 "nvme_admin": false, 00:04:18.398 "nvme_io": false, 00:04:18.398 "nvme_io_md": false, 00:04:18.398 "write_zeroes": true, 00:04:18.398 "zcopy": true, 00:04:18.398 "get_zone_info": false, 00:04:18.398 "zone_management": false, 00:04:18.398 "zone_append": false, 00:04:18.398 "compare": false, 00:04:18.398 "compare_and_write": false, 00:04:18.398 "abort": true, 00:04:18.398 "seek_hole": false, 00:04:18.398 "seek_data": false, 00:04:18.398 "copy": true, 00:04:18.398 "nvme_iov_md": false 00:04:18.398 }, 00:04:18.398 "memory_domains": [ 00:04:18.398 { 00:04:18.398 "dma_device_id": "system", 00:04:18.398 "dma_device_type": 1 00:04:18.398 }, 00:04:18.398 { 00:04:18.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.398 "dma_device_type": 2 00:04:18.398 } 00:04:18.398 ], 00:04:18.398 "driver_specific": { 00:04:18.398 "passthru": { 00:04:18.398 "name": "Passthru0", 00:04:18.398 "base_bdev_name": "Malloc2" 00:04:18.398 } 00:04:18.398 } 00:04:18.398 } 00:04:18.398 ]' 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.398 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:18.399 00:04:18.399 real 0m0.355s 00:04:18.399 user 0m0.189s 00:04:18.399 sys 0m0.062s 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.399 19:33:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.399 ************************************ 00:04:18.399 END TEST rpc_daemon_integrity 00:04:18.399 ************************************ 00:04:18.658 19:33:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:18.658 19:33:01 rpc -- rpc/rpc.sh@84 -- # killprocess 58669 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@954 -- # '[' -z 58669 ']' 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@958 -- # kill -0 58669 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@959 -- # uname 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58669 00:04:18.658 killing process with pid 58669 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58669' 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@973 -- # kill 58669 00:04:18.658 19:33:01 rpc -- common/autotest_common.sh@978 -- # wait 58669 00:04:21.194 00:04:21.194 real 0m5.418s 00:04:21.194 user 0m5.978s 00:04:21.194 sys 0m0.953s 00:04:21.194 19:33:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.194 19:33:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.194 ************************************ 00:04:21.194 END TEST rpc 00:04:21.194 ************************************ 00:04:21.194 19:33:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:21.194 19:33:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.194 19:33:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.194 19:33:03 -- common/autotest_common.sh@10 -- # set +x 00:04:21.194 ************************************ 00:04:21.194 START TEST skip_rpc 00:04:21.194 ************************************ 00:04:21.194 19:33:03 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:21.194 * Looking for test storage... 
00:04:21.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.194 19:33:03 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:21.194 19:33:03 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:21.194 19:33:03 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:21.194 19:33:04 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.194 19:33:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:21.194 19:33:04 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.194 19:33:04 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:21.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.194 --rc genhtml_branch_coverage=1 00:04:21.194 --rc genhtml_function_coverage=1 00:04:21.194 --rc genhtml_legend=1 00:04:21.194 --rc geninfo_all_blocks=1 00:04:21.194 --rc geninfo_unexecuted_blocks=1 00:04:21.194 00:04:21.194 ' 00:04:21.194 19:33:04 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:21.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.195 --rc genhtml_branch_coverage=1 00:04:21.195 --rc genhtml_function_coverage=1 00:04:21.195 --rc genhtml_legend=1 00:04:21.195 --rc geninfo_all_blocks=1 00:04:21.195 --rc geninfo_unexecuted_blocks=1 00:04:21.195 00:04:21.195 ' 00:04:21.195 19:33:04 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.195 --rc genhtml_branch_coverage=1 00:04:21.195 --rc genhtml_function_coverage=1 00:04:21.195 --rc genhtml_legend=1 00:04:21.195 --rc geninfo_all_blocks=1 00:04:21.195 --rc geninfo_unexecuted_blocks=1 00:04:21.195 00:04:21.195 ' 00:04:21.195 19:33:04 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.195 --rc genhtml_branch_coverage=1 00:04:21.195 --rc genhtml_function_coverage=1 00:04:21.195 --rc genhtml_legend=1 00:04:21.195 --rc geninfo_all_blocks=1 00:04:21.195 --rc geninfo_unexecuted_blocks=1 00:04:21.195 00:04:21.195 ' 00:04:21.195 19:33:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:21.454 19:33:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.454 19:33:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:21.454 19:33:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.454 19:33:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.454 19:33:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.454 ************************************ 00:04:21.454 START TEST skip_rpc 00:04:21.454 ************************************ 00:04:21.454 19:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:21.454 19:33:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58898 00:04:21.454 19:33:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:21.454 19:33:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.454 19:33:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:21.454 [2024-12-12 19:33:04.151533] Starting SPDK v25.01-pre 
git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:21.454 [2024-12-12 19:33:04.151686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58898 ] 00:04:21.712 [2024-12-12 19:33:04.326632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.712 [2024-12-12 19:33:04.439360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58898 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58898 ']' 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58898 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58898 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.982 killing process with pid 58898 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58898' 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58898 00:04:26.982 19:33:09 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58898 00:04:28.892 00:04:28.892 real 0m7.449s 00:04:28.892 user 0m6.999s 00:04:28.892 sys 0m0.369s 00:04:28.892 19:33:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.892 19:33:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.892 ************************************ 00:04:28.892 END TEST skip_rpc 00:04:28.892 ************************************ 00:04:28.892 19:33:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:28.892 19:33:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.892 19:33:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.892 19:33:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.892 
************************************ 00:04:28.892 START TEST skip_rpc_with_json 00:04:28.892 ************************************ 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59008 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59008 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59008 ']' 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.892 19:33:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.892 [2024-12-12 19:33:11.661831] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:28.892 [2024-12-12 19:33:11.662359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59008 ] 00:04:29.152 [2024-12-12 19:33:11.818349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.152 [2024-12-12 19:33:11.933349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.121 [2024-12-12 19:33:12.796204] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.121 request: 00:04:30.121 { 00:04:30.121 "trtype": "tcp", 00:04:30.121 "method": "nvmf_get_transports", 00:04:30.121 "req_id": 1 00:04:30.121 } 00:04:30.121 Got JSON-RPC error response 00:04:30.121 response: 00:04:30.121 { 00:04:30.121 "code": -19, 00:04:30.121 "message": "No such device" 00:04:30.121 } 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.121 [2024-12-12 19:33:12.808305] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.121 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.381 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.381 19:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.381 { 00:04:30.381 "subsystems": [ 00:04:30.381 { 00:04:30.381 "subsystem": "fsdev", 00:04:30.381 "config": [ 00:04:30.381 { 00:04:30.381 "method": "fsdev_set_opts", 00:04:30.381 "params": { 00:04:30.381 "fsdev_io_pool_size": 65535, 00:04:30.381 "fsdev_io_cache_size": 256 00:04:30.381 } 00:04:30.381 } 00:04:30.381 ] 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "subsystem": "keyring", 00:04:30.381 "config": [] 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "subsystem": "iobuf", 00:04:30.381 "config": [ 00:04:30.381 { 00:04:30.381 "method": "iobuf_set_options", 00:04:30.381 "params": { 00:04:30.381 "small_pool_count": 8192, 00:04:30.381 "large_pool_count": 1024, 00:04:30.381 "small_bufsize": 8192, 00:04:30.381 "large_bufsize": 135168, 00:04:30.381 "enable_numa": false 00:04:30.381 } 00:04:30.381 } 00:04:30.381 ] 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "subsystem": "sock", 00:04:30.381 "config": [ 00:04:30.381 { 00:04:30.381 "method": "sock_set_default_impl", 00:04:30.381 "params": { 00:04:30.381 "impl_name": "posix" 00:04:30.381 } 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "method": "sock_impl_set_options", 00:04:30.381 "params": { 00:04:30.381 "impl_name": "ssl", 00:04:30.381 "recv_buf_size": 4096, 00:04:30.381 "send_buf_size": 4096, 00:04:30.381 "enable_recv_pipe": true, 00:04:30.381 "enable_quickack": false, 00:04:30.381 
"enable_placement_id": 0, 00:04:30.381 "enable_zerocopy_send_server": true, 00:04:30.381 "enable_zerocopy_send_client": false, 00:04:30.381 "zerocopy_threshold": 0, 00:04:30.381 "tls_version": 0, 00:04:30.381 "enable_ktls": false 00:04:30.381 } 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "method": "sock_impl_set_options", 00:04:30.381 "params": { 00:04:30.381 "impl_name": "posix", 00:04:30.381 "recv_buf_size": 2097152, 00:04:30.381 "send_buf_size": 2097152, 00:04:30.381 "enable_recv_pipe": true, 00:04:30.381 "enable_quickack": false, 00:04:30.381 "enable_placement_id": 0, 00:04:30.381 "enable_zerocopy_send_server": true, 00:04:30.381 "enable_zerocopy_send_client": false, 00:04:30.381 "zerocopy_threshold": 0, 00:04:30.381 "tls_version": 0, 00:04:30.381 "enable_ktls": false 00:04:30.381 } 00:04:30.381 } 00:04:30.381 ] 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "subsystem": "vmd", 00:04:30.381 "config": [] 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "subsystem": "accel", 00:04:30.381 "config": [ 00:04:30.381 { 00:04:30.381 "method": "accel_set_options", 00:04:30.381 "params": { 00:04:30.381 "small_cache_size": 128, 00:04:30.381 "large_cache_size": 16, 00:04:30.381 "task_count": 2048, 00:04:30.381 "sequence_count": 2048, 00:04:30.381 "buf_count": 2048 00:04:30.381 } 00:04:30.381 } 00:04:30.381 ] 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "subsystem": "bdev", 00:04:30.381 "config": [ 00:04:30.381 { 00:04:30.381 "method": "bdev_set_options", 00:04:30.381 "params": { 00:04:30.381 "bdev_io_pool_size": 65535, 00:04:30.381 "bdev_io_cache_size": 256, 00:04:30.381 "bdev_auto_examine": true, 00:04:30.381 "iobuf_small_cache_size": 128, 00:04:30.381 "iobuf_large_cache_size": 16 00:04:30.381 } 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "method": "bdev_raid_set_options", 00:04:30.381 "params": { 00:04:30.381 "process_window_size_kb": 1024, 00:04:30.381 "process_max_bandwidth_mb_sec": 0 00:04:30.381 } 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "method": "bdev_iscsi_set_options", 
00:04:30.381 "params": { 00:04:30.381 "timeout_sec": 30 00:04:30.381 } 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "method": "bdev_nvme_set_options", 00:04:30.381 "params": { 00:04:30.381 "action_on_timeout": "none", 00:04:30.381 "timeout_us": 0, 00:04:30.381 "timeout_admin_us": 0, 00:04:30.381 "keep_alive_timeout_ms": 10000, 00:04:30.381 "arbitration_burst": 0, 00:04:30.381 "low_priority_weight": 0, 00:04:30.381 "medium_priority_weight": 0, 00:04:30.381 "high_priority_weight": 0, 00:04:30.381 "nvme_adminq_poll_period_us": 10000, 00:04:30.381 "nvme_ioq_poll_period_us": 0, 00:04:30.381 "io_queue_requests": 0, 00:04:30.381 "delay_cmd_submit": true, 00:04:30.381 "transport_retry_count": 4, 00:04:30.381 "bdev_retry_count": 3, 00:04:30.381 "transport_ack_timeout": 0, 00:04:30.381 "ctrlr_loss_timeout_sec": 0, 00:04:30.381 "reconnect_delay_sec": 0, 00:04:30.381 "fast_io_fail_timeout_sec": 0, 00:04:30.381 "disable_auto_failback": false, 00:04:30.381 "generate_uuids": false, 00:04:30.381 "transport_tos": 0, 00:04:30.381 "nvme_error_stat": false, 00:04:30.381 "rdma_srq_size": 0, 00:04:30.381 "io_path_stat": false, 00:04:30.381 "allow_accel_sequence": false, 00:04:30.381 "rdma_max_cq_size": 0, 00:04:30.381 "rdma_cm_event_timeout_ms": 0, 00:04:30.381 "dhchap_digests": [ 00:04:30.381 "sha256", 00:04:30.381 "sha384", 00:04:30.381 "sha512" 00:04:30.381 ], 00:04:30.381 "dhchap_dhgroups": [ 00:04:30.381 "null", 00:04:30.381 "ffdhe2048", 00:04:30.381 "ffdhe3072", 00:04:30.381 "ffdhe4096", 00:04:30.381 "ffdhe6144", 00:04:30.381 "ffdhe8192" 00:04:30.381 ], 00:04:30.381 "rdma_umr_per_io": false 00:04:30.381 } 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "method": "bdev_nvme_set_hotplug", 00:04:30.381 "params": { 00:04:30.381 "period_us": 100000, 00:04:30.381 "enable": false 00:04:30.381 } 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "method": "bdev_wait_for_examine" 00:04:30.381 } 00:04:30.381 ] 00:04:30.381 }, 00:04:30.381 { 00:04:30.381 "subsystem": "scsi", 00:04:30.381 "config": null 
00:04:30.381 },
00:04:30.381 {
00:04:30.381 "subsystem": "scheduler",
00:04:30.381 "config": [
00:04:30.381 {
00:04:30.381 "method": "framework_set_scheduler",
00:04:30.381 "params": {
00:04:30.381 "name": "static"
00:04:30.381 }
00:04:30.381 }
00:04:30.381 ]
00:04:30.381 },
00:04:30.381 {
00:04:30.381 "subsystem": "vhost_scsi",
00:04:30.381 "config": []
00:04:30.381 },
00:04:30.381 {
00:04:30.381 "subsystem": "vhost_blk",
00:04:30.381 "config": []
00:04:30.381 },
00:04:30.381 {
00:04:30.381 "subsystem": "ublk",
00:04:30.381 "config": []
00:04:30.381 },
00:04:30.381 {
00:04:30.381 "subsystem": "nbd",
00:04:30.381 "config": []
00:04:30.381 },
00:04:30.381 {
00:04:30.381 "subsystem": "nvmf",
00:04:30.381 "config": [
00:04:30.381 {
00:04:30.381 "method": "nvmf_set_config",
00:04:30.381 "params": {
00:04:30.381 "discovery_filter": "match_any",
00:04:30.381 "admin_cmd_passthru": {
00:04:30.381 "identify_ctrlr": false
00:04:30.381 },
00:04:30.381 "dhchap_digests": [
00:04:30.381 "sha256",
00:04:30.381 "sha384",
00:04:30.381 "sha512"
00:04:30.382 ],
00:04:30.382 "dhchap_dhgroups": [
00:04:30.382 "null",
00:04:30.382 "ffdhe2048",
00:04:30.382 "ffdhe3072",
00:04:30.382 "ffdhe4096",
00:04:30.382 "ffdhe6144",
00:04:30.382 "ffdhe8192"
00:04:30.382 ]
00:04:30.382 }
00:04:30.382 },
00:04:30.382 {
00:04:30.382 "method": "nvmf_set_max_subsystems",
00:04:30.382 "params": {
00:04:30.382 "max_subsystems": 1024
00:04:30.382 }
00:04:30.382 },
00:04:30.382 {
00:04:30.382 "method": "nvmf_set_crdt",
00:04:30.382 "params": {
00:04:30.382 "crdt1": 0,
00:04:30.382 "crdt2": 0,
00:04:30.382 "crdt3": 0
00:04:30.382 }
00:04:30.382 },
00:04:30.382 {
00:04:30.382 "method": "nvmf_create_transport",
00:04:30.382 "params": {
00:04:30.382 "trtype": "TCP",
00:04:30.382 "max_queue_depth": 128,
00:04:30.382 "max_io_qpairs_per_ctrlr": 127,
00:04:30.382 "in_capsule_data_size": 4096,
00:04:30.382 "max_io_size": 131072,
00:04:30.382 "io_unit_size": 131072,
00:04:30.382 "max_aq_depth": 128,
00:04:30.382 "num_shared_buffers": 511,
00:04:30.382 "buf_cache_size": 4294967295,
00:04:30.382 "dif_insert_or_strip": false,
00:04:30.382 "zcopy": false,
00:04:30.382 "c2h_success": true,
00:04:30.382 "sock_priority": 0,
00:04:30.382 "abort_timeout_sec": 1,
00:04:30.382 "ack_timeout": 0,
00:04:30.382 "data_wr_pool_size": 0
00:04:30.382 }
00:04:30.382 }
00:04:30.382 ]
00:04:30.382 },
00:04:30.382 {
00:04:30.382 "subsystem": "iscsi",
00:04:30.382 "config": [
00:04:30.382 {
00:04:30.382 "method": "iscsi_set_options",
00:04:30.382 "params": {
00:04:30.382 "node_base": "iqn.2016-06.io.spdk",
00:04:30.382 "max_sessions": 128,
00:04:30.382 "max_connections_per_session": 2,
00:04:30.382 "max_queue_depth": 64,
00:04:30.382 "default_time2wait": 2,
00:04:30.382 "default_time2retain": 20,
00:04:30.382 "first_burst_length": 8192,
00:04:30.382 "immediate_data": true,
00:04:30.382 "allow_duplicated_isid": false,
00:04:30.382 "error_recovery_level": 0,
00:04:30.382 "nop_timeout": 60,
00:04:30.382 "nop_in_interval": 30,
00:04:30.382 "disable_chap": false,
00:04:30.382 "require_chap": false,
00:04:30.382 "mutual_chap": false,
00:04:30.382 "chap_group": 0,
00:04:30.382 "max_large_datain_per_connection": 64,
00:04:30.382 "max_r2t_per_connection": 4,
00:04:30.382 "pdu_pool_size": 36864,
00:04:30.382 "immediate_data_pool_size": 16384,
00:04:30.382 "data_out_pool_size": 2048
00:04:30.382 }
00:04:30.382 }
00:04:30.382 ]
00:04:30.382 }
00:04:30.382 ]
00:04:30.382 }
00:04:30.382 19:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:30.382 19:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59008
00:04:30.382 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59008 ']'
00:04:30.382 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59008
00:04:30.382 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:30.382 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:30.382 19:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59008
00:04:30.382 19:33:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:30.382 19:33:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:30.382 killing process with pid 59008
19:33:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59008'
19:33:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59008
19:33:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59008
00:04:32.921 19:33:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59064
19:33:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
19:33:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:38.193 19:33:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59064
00:04:38.193 19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59064 ']'
00:04:38.193 19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59064
00:04:38.193 19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:38.193 19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:38.193 19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59064
00:04:38.193 19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:38.193 19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:38.193 killing process with pid 59064
19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59064'
19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59064
19:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59064
00:04:40.099 19:33:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:40.099 19:33:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:40.099 
00:04:40.099 real	0m11.337s
00:04:40.099 user	0m10.820s
00:04:40.099 sys	0m0.820s
00:04:40.099 19:33:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:40.099 19:33:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:40.099 ************************************
00:04:40.099 END TEST skip_rpc_with_json
00:04:40.099 ************************************
00:04:40.359 19:33:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:40.359 19:33:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:40.359 19:33:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:40.359 19:33:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:40.359 ************************************
00:04:40.359 START TEST skip_rpc_with_delay
00:04:40.359 ************************************
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:40.359 19:33:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:40.359 [2024-12-12 19:33:23.063133] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:40.359 19:33:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:04:40.359 19:33:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:40.359 19:33:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:40.359 19:33:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:40.359 
00:04:40.359 real	0m0.163s
00:04:40.359 user	0m0.094s
00:04:40.359 sys	0m0.068s
00:04:40.359 19:33:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:40.359 19:33:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:40.359 ************************************
00:04:40.359 END TEST skip_rpc_with_delay
00:04:40.359 ************************************
00:04:40.359 19:33:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:40.359 19:33:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:40.359 19:33:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:40.359 19:33:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:40.359 19:33:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:40.359 19:33:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:40.359 ************************************
00:04:40.359 START TEST exit_on_failed_rpc_init
00:04:40.359 ************************************
00:04:40.359 19:33:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:04:40.359 19:33:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59192
00:04:40.359 19:33:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:40.359 19:33:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59192
00:04:40.359 19:33:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59192 ']'
00:04:40.359 19:33:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:40.359 19:33:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:40.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:33:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
19:33:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
19:33:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:40.618 [2024-12-12 19:33:23.290012] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
[2024-12-12 19:33:23.290126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59192 ]
00:04:40.878 [2024-12-12 19:33:23.464337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.878 [2024-12-12 19:33:23.580199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:41.816 19:33:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:41.816 [2024-12-12 19:33:24.542330] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:04:41.816 [2024-12-12 19:33:24.542438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59210 ]
00:04:42.075 [2024-12-12 19:33:24.719229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:42.075 [2024-12-12 19:33:24.840238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:04:42.075 [2024-12-12 19:33:24.840341] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:42.076 [2024-12-12 19:33:24.840355] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
[2024-12-12 19:33:24.840366] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59192
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59192 ']'
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59192
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59192
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:42.335 19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:42.335 killing process with pid 59192
19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59192'
19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59192
19:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59192
00:04:44.874 
00:04:44.874 real	0m4.357s
00:04:44.874 user	0m4.692s
00:04:44.874 sys	0m0.561s
00:04:44.874 19:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:44.874 19:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:44.874 ************************************
00:04:44.874 END TEST exit_on_failed_rpc_init
00:04:44.874 ************************************
00:04:44.874 19:33:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:44.874 
00:04:44.874 real	0m23.791s
00:04:44.874 user	0m22.810s
00:04:44.874 sys	0m2.112s
00:04:44.874 19:33:27 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:44.874 19:33:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.874 ************************************
00:04:44.874 END TEST skip_rpc
00:04:44.874 ************************************
00:04:44.874 19:33:27 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:44.874 19:33:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:44.874 19:33:27 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:44.874 19:33:27 -- common/autotest_common.sh@10 -- # set +x
00:04:44.874 ************************************
00:04:44.874 START TEST rpc_client
00:04:44.874 ************************************
00:04:44.874 19:33:27 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:45.134 * Looking for test storage...
00:04:45.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@345 -- # : 1
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@353 -- # local d=1
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@355 -- # echo 1
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@353 -- # local d=2
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@355 -- # echo 2
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:45.135 19:33:27 rpc_client -- scripts/common.sh@368 -- # return 0
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:45.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.135 --rc genhtml_branch_coverage=1
00:04:45.135 --rc genhtml_function_coverage=1
00:04:45.135 --rc genhtml_legend=1
00:04:45.135 --rc geninfo_all_blocks=1
00:04:45.135 --rc geninfo_unexecuted_blocks=1
00:04:45.135 
00:04:45.135 '
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:45.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.135 --rc genhtml_branch_coverage=1
00:04:45.135 --rc genhtml_function_coverage=1
00:04:45.135 --rc genhtml_legend=1
00:04:45.135 --rc geninfo_all_blocks=1
00:04:45.135 --rc geninfo_unexecuted_blocks=1
00:04:45.135 
00:04:45.135 '
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:45.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.135 --rc genhtml_branch_coverage=1
00:04:45.135 --rc genhtml_function_coverage=1
00:04:45.135 --rc genhtml_legend=1
00:04:45.135 --rc geninfo_all_blocks=1
00:04:45.135 --rc geninfo_unexecuted_blocks=1
00:04:45.135 
00:04:45.135 '
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:45.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.135 --rc genhtml_branch_coverage=1
00:04:45.135 --rc genhtml_function_coverage=1
00:04:45.135 --rc genhtml_legend=1
00:04:45.135 --rc geninfo_all_blocks=1
00:04:45.135 --rc geninfo_unexecuted_blocks=1
00:04:45.135 
00:04:45.135 '
00:04:45.135 19:33:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:04:45.135 OK
00:04:45.135 19:33:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:45.135 
00:04:45.135 real	0m0.297s
00:04:45.135 user	0m0.150s
00:04:45.135 sys	0m0.164s
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:45.135 19:33:27 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:45.135 ************************************
00:04:45.135 END TEST rpc_client
00:04:45.135 ************************************
00:04:45.396 19:33:28 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:45.396 19:33:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:45.396 19:33:28 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:45.396 19:33:28 -- common/autotest_common.sh@10 -- # set +x
00:04:45.396 ************************************
00:04:45.396 START TEST json_config
00:04:45.396 ************************************
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:45.396 19:33:28 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:45.396 19:33:28 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:45.396 19:33:28 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:45.396 19:33:28 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:04:45.396 19:33:28 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:04:45.396 19:33:28 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:04:45.396 19:33:28 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:04:45.396 19:33:28 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:04:45.396 19:33:28 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:04:45.396 19:33:28 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:04:45.396 19:33:28 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:45.396 19:33:28 json_config -- scripts/common.sh@344 -- # case "$op" in
00:04:45.396 19:33:28 json_config -- scripts/common.sh@345 -- # : 1
00:04:45.396 19:33:28 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:45.396 19:33:28 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:45.396 19:33:28 json_config -- scripts/common.sh@365 -- # decimal 1
00:04:45.396 19:33:28 json_config -- scripts/common.sh@353 -- # local d=1
00:04:45.396 19:33:28 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:45.396 19:33:28 json_config -- scripts/common.sh@355 -- # echo 1
00:04:45.396 19:33:28 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:04:45.396 19:33:28 json_config -- scripts/common.sh@366 -- # decimal 2
00:04:45.396 19:33:28 json_config -- scripts/common.sh@353 -- # local d=2
00:04:45.396 19:33:28 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:45.396 19:33:28 json_config -- scripts/common.sh@355 -- # echo 2
00:04:45.396 19:33:28 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:04:45.396 19:33:28 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:45.396 19:33:28 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:45.396 19:33:28 json_config -- scripts/common.sh@368 -- # return 0
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:45.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.396 --rc genhtml_branch_coverage=1
00:04:45.396 --rc genhtml_function_coverage=1
00:04:45.396 --rc genhtml_legend=1
00:04:45.396 --rc geninfo_all_blocks=1
00:04:45.396 --rc geninfo_unexecuted_blocks=1
00:04:45.396 
00:04:45.396 '
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:45.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.396 --rc genhtml_branch_coverage=1
00:04:45.396 --rc genhtml_function_coverage=1
00:04:45.396 --rc genhtml_legend=1
00:04:45.396 --rc geninfo_all_blocks=1
00:04:45.396 --rc geninfo_unexecuted_blocks=1
00:04:45.396 
00:04:45.396 '
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:45.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.396 --rc genhtml_branch_coverage=1
00:04:45.396 --rc genhtml_function_coverage=1
00:04:45.396 --rc genhtml_legend=1
00:04:45.396 --rc geninfo_all_blocks=1
00:04:45.396 --rc geninfo_unexecuted_blocks=1
00:04:45.396 
00:04:45.396 '
00:04:45.396 19:33:28 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:45.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.396 --rc genhtml_branch_coverage=1
00:04:45.396 --rc genhtml_function_coverage=1
00:04:45.396 --rc genhtml_legend=1
00:04:45.396 --rc geninfo_all_blocks=1
00:04:45.396 --rc geninfo_unexecuted_blocks=1
00:04:45.396 
00:04:45.396 '
00:04:45.396 19:33:28 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1950b98c-7192-4b5a-a8dc-2e6969d48a59
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1950b98c-7192-4b5a-a8dc-2e6969d48a59
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:45.396 19:33:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:45.397 19:33:28 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:04:45.397 19:33:28 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:45.397 19:33:28 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:45.397 19:33:28 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:45.397 19:33:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:45.397 19:33:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:45.397 19:33:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:45.397 19:33:28 json_config -- paths/export.sh@5 -- # export PATH
00:04:45.397 19:33:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@51 -- # : 0
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:45.397 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:45.397 19:33:28 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:45.397 19:33:28 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:04:45.397 19:33:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:45.397 19:33:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:45.397 19:33:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:45.397 19:33:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:45.397 WARNING: No tests are enabled so not running JSON configuration tests 00:04:45.397 19:33:28 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:45.397 19:33:28 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:45.397 00:04:45.397 real 0m0.208s 00:04:45.397 user 0m0.126s 00:04:45.397 sys 0m0.089s 00:04:45.397 19:33:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.397 19:33:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.397 ************************************ 00:04:45.397 END TEST json_config 00:04:45.397 ************************************ 00:04:45.657 19:33:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:45.657 19:33:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.657 19:33:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.657 19:33:28 -- common/autotest_common.sh@10 -- # set +x 00:04:45.657 ************************************ 00:04:45.657 START TEST json_config_extra_key 00:04:45.657 ************************************ 00:04:45.657 19:33:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:45.657 19:33:28 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.657 19:33:28 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:04:45.657 19:33:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.657 19:33:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:45.657 19:33:28 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.657 19:33:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.657 --rc genhtml_branch_coverage=1 00:04:45.657 --rc genhtml_function_coverage=1 00:04:45.657 --rc genhtml_legend=1 00:04:45.657 --rc geninfo_all_blocks=1 00:04:45.657 --rc geninfo_unexecuted_blocks=1 00:04:45.657 00:04:45.657 ' 00:04:45.657 19:33:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.657 --rc genhtml_branch_coverage=1 00:04:45.657 --rc genhtml_function_coverage=1 00:04:45.657 --rc 
genhtml_legend=1 00:04:45.657 --rc geninfo_all_blocks=1 00:04:45.657 --rc geninfo_unexecuted_blocks=1 00:04:45.657 00:04:45.657 ' 00:04:45.657 19:33:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:45.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.657 --rc genhtml_branch_coverage=1 00:04:45.657 --rc genhtml_function_coverage=1 00:04:45.657 --rc genhtml_legend=1 00:04:45.657 --rc geninfo_all_blocks=1 00:04:45.657 --rc geninfo_unexecuted_blocks=1 00:04:45.657 00:04:45.657 ' 00:04:45.657 19:33:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.657 --rc genhtml_branch_coverage=1 00:04:45.657 --rc genhtml_function_coverage=1 00:04:45.657 --rc genhtml_legend=1 00:04:45.657 --rc geninfo_all_blocks=1 00:04:45.657 --rc geninfo_unexecuted_blocks=1 00:04:45.657 00:04:45.657 ' 00:04:45.657 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1950b98c-7192-4b5a-a8dc-2e6969d48a59 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1950b98c-7192-4b5a-a8dc-2e6969d48a59 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.657 19:33:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:45.657 19:33:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.918 19:33:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.918 19:33:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.918 19:33:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.918 19:33:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.918 19:33:28 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.918 19:33:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.918 19:33:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:45.918 19:33:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.918 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.918 19:33:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.918 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:45.918 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:45.918 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:45.918 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:45.918 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:45.918 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:45.919 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:45.919 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:45.919 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:45.919 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.919 INFO: launching applications... 00:04:45.919 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:45.919 19:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59420 00:04:45.919 Waiting for target to run... 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59420 /var/tmp/spdk_tgt.sock 00:04:45.919 19:33:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59420 ']' 00:04:45.919 19:33:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.919 19:33:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.919 19:33:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:45.919 19:33:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.919 19:33:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.919 19:33:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.919 [2024-12-12 19:33:28.618773] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:45.919 [2024-12-12 19:33:28.618904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59420 ] 00:04:46.179 [2024-12-12 19:33:29.010151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.439 [2024-12-12 19:33:29.120405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.378 19:33:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.378 19:33:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:47.378 00:04:47.378 19:33:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:47.378 INFO: shutting down applications... 00:04:47.378 19:33:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:47.378 19:33:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:47.378 19:33:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:47.378 19:33:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.378 19:33:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59420 ]] 00:04:47.378 19:33:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59420 00:04:47.378 19:33:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.378 19:33:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.378 19:33:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59420 00:04:47.378 19:33:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.653 19:33:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.653 19:33:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.653 19:33:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59420 00:04:47.653 19:33:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.241 19:33:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.241 19:33:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.241 19:33:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59420 00:04:48.241 19:33:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.812 19:33:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.812 19:33:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.812 19:33:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59420 00:04:48.812 19:33:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.072 19:33:31 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:49.072 19:33:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.072 19:33:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59420 00:04:49.072 19:33:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.642 19:33:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.642 19:33:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.642 19:33:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59420 00:04:49.642 19:33:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.214 19:33:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.214 19:33:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.214 19:33:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59420 00:04:50.214 19:33:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.214 19:33:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:50.214 19:33:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.214 19:33:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.214 SPDK target shutdown done 00:04:50.214 Success 00:04:50.214 19:33:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:50.214 00:04:50.214 real 0m4.617s 00:04:50.214 user 0m4.139s 00:04:50.214 sys 0m0.554s 00:04:50.214 19:33:32 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.214 19:33:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.214 ************************************ 00:04:50.214 END TEST json_config_extra_key 00:04:50.214 ************************************ 00:04:50.214 19:33:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.214 19:33:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.214 19:33:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.214 19:33:32 -- common/autotest_common.sh@10 -- # set +x 00:04:50.214 ************************************ 00:04:50.214 START TEST alias_rpc 00:04:50.214 ************************************ 00:04:50.214 19:33:32 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.474 * Looking for test storage... 00:04:50.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:50.474 19:33:33 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.474 19:33:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.474 --rc genhtml_branch_coverage=1 00:04:50.474 --rc genhtml_function_coverage=1 00:04:50.474 --rc genhtml_legend=1 00:04:50.474 --rc geninfo_all_blocks=1 00:04:50.474 --rc geninfo_unexecuted_blocks=1 00:04:50.474 00:04:50.474 ' 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.474 --rc genhtml_branch_coverage=1 00:04:50.474 --rc genhtml_function_coverage=1 00:04:50.474 --rc 
genhtml_legend=1 00:04:50.474 --rc geninfo_all_blocks=1 00:04:50.474 --rc geninfo_unexecuted_blocks=1 00:04:50.474 00:04:50.474 ' 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.474 --rc genhtml_branch_coverage=1 00:04:50.474 --rc genhtml_function_coverage=1 00:04:50.474 --rc genhtml_legend=1 00:04:50.474 --rc geninfo_all_blocks=1 00:04:50.474 --rc geninfo_unexecuted_blocks=1 00:04:50.474 00:04:50.474 ' 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.474 --rc genhtml_branch_coverage=1 00:04:50.474 --rc genhtml_function_coverage=1 00:04:50.474 --rc genhtml_legend=1 00:04:50.474 --rc geninfo_all_blocks=1 00:04:50.474 --rc geninfo_unexecuted_blocks=1 00:04:50.474 00:04:50.474 ' 00:04:50.474 19:33:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:50.474 19:33:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59535 00:04:50.474 19:33:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.474 19:33:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59535 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59535 ']' 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.474 19:33:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.474 [2024-12-12 19:33:33.313861] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:50.474 [2024-12-12 19:33:33.314000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59535 ] 00:04:50.734 [2024-12-12 19:33:33.494464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.994 [2024-12-12 19:33:33.609762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:51.935 19:33:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:51.935 19:33:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59535 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59535 ']' 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59535 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59535 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.935 killing process with pid 59535 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59535' 00:04:51.935 19:33:34 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 59535 00:04:51.935 19:33:34 alias_rpc -- common/autotest_common.sh@978 -- # wait 59535 00:04:54.475 00:04:54.475 real 0m4.184s 00:04:54.475 user 0m4.152s 00:04:54.475 sys 0m0.587s 00:04:54.475 19:33:37 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.475 19:33:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.475 ************************************ 00:04:54.475 END TEST alias_rpc 00:04:54.475 ************************************ 00:04:54.475 19:33:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:54.475 19:33:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.475 19:33:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.475 19:33:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.475 19:33:37 -- common/autotest_common.sh@10 -- # set +x 00:04:54.475 ************************************ 00:04:54.475 START TEST spdkcli_tcp 00:04:54.475 ************************************ 00:04:54.475 19:33:37 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.735 * Looking for test storage... 
00:04:54.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.735 19:33:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.735 --rc genhtml_branch_coverage=1 00:04:54.735 --rc genhtml_function_coverage=1 00:04:54.735 --rc genhtml_legend=1 00:04:54.735 --rc geninfo_all_blocks=1 00:04:54.735 --rc geninfo_unexecuted_blocks=1 00:04:54.735 00:04:54.735 ' 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.735 --rc genhtml_branch_coverage=1 00:04:54.735 --rc genhtml_function_coverage=1 00:04:54.735 --rc genhtml_legend=1 00:04:54.735 --rc geninfo_all_blocks=1 00:04:54.735 --rc geninfo_unexecuted_blocks=1 00:04:54.735 00:04:54.735 ' 00:04:54.735 19:33:37 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.735 --rc genhtml_branch_coverage=1 00:04:54.735 --rc genhtml_function_coverage=1 00:04:54.735 --rc genhtml_legend=1 00:04:54.735 --rc geninfo_all_blocks=1 00:04:54.735 --rc geninfo_unexecuted_blocks=1 00:04:54.735 00:04:54.735 ' 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.735 --rc genhtml_branch_coverage=1 00:04:54.735 --rc genhtml_function_coverage=1 00:04:54.735 --rc genhtml_legend=1 00:04:54.735 --rc geninfo_all_blocks=1 00:04:54.735 --rc geninfo_unexecuted_blocks=1 00:04:54.735 00:04:54.735 ' 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59643 00:04:54.735 19:33:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59643 00:04:54.735 19:33:37 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 59643 ']' 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.735 19:33:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.735 [2024-12-12 19:33:37.575307] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:54.735 [2024-12-12 19:33:37.575506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59643 ] 00:04:54.994 [2024-12-12 19:33:37.757677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.253 [2024-12-12 19:33:37.882054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.253 [2024-12-12 19:33:37.882091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.191 19:33:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.191 19:33:38 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:56.191 19:33:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59661 00:04:56.191 19:33:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:56.191 19:33:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.191 [ 00:04:56.191 "bdev_malloc_delete", 
00:04:56.191 "bdev_malloc_create", 00:04:56.191 "bdev_null_resize", 00:04:56.191 "bdev_null_delete", 00:04:56.191 "bdev_null_create", 00:04:56.191 "bdev_nvme_cuse_unregister", 00:04:56.191 "bdev_nvme_cuse_register", 00:04:56.191 "bdev_opal_new_user", 00:04:56.191 "bdev_opal_set_lock_state", 00:04:56.191 "bdev_opal_delete", 00:04:56.191 "bdev_opal_get_info", 00:04:56.191 "bdev_opal_create", 00:04:56.191 "bdev_nvme_opal_revert", 00:04:56.191 "bdev_nvme_opal_init", 00:04:56.191 "bdev_nvme_send_cmd", 00:04:56.191 "bdev_nvme_set_keys", 00:04:56.191 "bdev_nvme_get_path_iostat", 00:04:56.191 "bdev_nvme_get_mdns_discovery_info", 00:04:56.191 "bdev_nvme_stop_mdns_discovery", 00:04:56.191 "bdev_nvme_start_mdns_discovery", 00:04:56.191 "bdev_nvme_set_multipath_policy", 00:04:56.191 "bdev_nvme_set_preferred_path", 00:04:56.191 "bdev_nvme_get_io_paths", 00:04:56.191 "bdev_nvme_remove_error_injection", 00:04:56.191 "bdev_nvme_add_error_injection", 00:04:56.191 "bdev_nvme_get_discovery_info", 00:04:56.191 "bdev_nvme_stop_discovery", 00:04:56.191 "bdev_nvme_start_discovery", 00:04:56.191 "bdev_nvme_get_controller_health_info", 00:04:56.191 "bdev_nvme_disable_controller", 00:04:56.191 "bdev_nvme_enable_controller", 00:04:56.191 "bdev_nvme_reset_controller", 00:04:56.191 "bdev_nvme_get_transport_statistics", 00:04:56.191 "bdev_nvme_apply_firmware", 00:04:56.191 "bdev_nvme_detach_controller", 00:04:56.191 "bdev_nvme_get_controllers", 00:04:56.191 "bdev_nvme_attach_controller", 00:04:56.191 "bdev_nvme_set_hotplug", 00:04:56.191 "bdev_nvme_set_options", 00:04:56.191 "bdev_passthru_delete", 00:04:56.191 "bdev_passthru_create", 00:04:56.191 "bdev_lvol_set_parent_bdev", 00:04:56.191 "bdev_lvol_set_parent", 00:04:56.191 "bdev_lvol_check_shallow_copy", 00:04:56.191 "bdev_lvol_start_shallow_copy", 00:04:56.191 "bdev_lvol_grow_lvstore", 00:04:56.191 "bdev_lvol_get_lvols", 00:04:56.191 "bdev_lvol_get_lvstores", 00:04:56.191 "bdev_lvol_delete", 00:04:56.191 "bdev_lvol_set_read_only", 
00:04:56.191 "bdev_lvol_resize", 00:04:56.191 "bdev_lvol_decouple_parent", 00:04:56.191 "bdev_lvol_inflate", 00:04:56.191 "bdev_lvol_rename", 00:04:56.191 "bdev_lvol_clone_bdev", 00:04:56.191 "bdev_lvol_clone", 00:04:56.191 "bdev_lvol_snapshot", 00:04:56.191 "bdev_lvol_create", 00:04:56.191 "bdev_lvol_delete_lvstore", 00:04:56.191 "bdev_lvol_rename_lvstore", 00:04:56.191 "bdev_lvol_create_lvstore", 00:04:56.191 "bdev_raid_set_options", 00:04:56.191 "bdev_raid_remove_base_bdev", 00:04:56.191 "bdev_raid_add_base_bdev", 00:04:56.191 "bdev_raid_delete", 00:04:56.191 "bdev_raid_create", 00:04:56.191 "bdev_raid_get_bdevs", 00:04:56.191 "bdev_error_inject_error", 00:04:56.191 "bdev_error_delete", 00:04:56.191 "bdev_error_create", 00:04:56.191 "bdev_split_delete", 00:04:56.191 "bdev_split_create", 00:04:56.191 "bdev_delay_delete", 00:04:56.191 "bdev_delay_create", 00:04:56.191 "bdev_delay_update_latency", 00:04:56.191 "bdev_zone_block_delete", 00:04:56.191 "bdev_zone_block_create", 00:04:56.191 "blobfs_create", 00:04:56.191 "blobfs_detect", 00:04:56.191 "blobfs_set_cache_size", 00:04:56.191 "bdev_aio_delete", 00:04:56.191 "bdev_aio_rescan", 00:04:56.191 "bdev_aio_create", 00:04:56.191 "bdev_ftl_set_property", 00:04:56.191 "bdev_ftl_get_properties", 00:04:56.191 "bdev_ftl_get_stats", 00:04:56.191 "bdev_ftl_unmap", 00:04:56.191 "bdev_ftl_unload", 00:04:56.191 "bdev_ftl_delete", 00:04:56.191 "bdev_ftl_load", 00:04:56.191 "bdev_ftl_create", 00:04:56.191 "bdev_virtio_attach_controller", 00:04:56.191 "bdev_virtio_scsi_get_devices", 00:04:56.191 "bdev_virtio_detach_controller", 00:04:56.191 "bdev_virtio_blk_set_hotplug", 00:04:56.191 "bdev_iscsi_delete", 00:04:56.191 "bdev_iscsi_create", 00:04:56.191 "bdev_iscsi_set_options", 00:04:56.191 "accel_error_inject_error", 00:04:56.191 "ioat_scan_accel_module", 00:04:56.191 "dsa_scan_accel_module", 00:04:56.191 "iaa_scan_accel_module", 00:04:56.191 "keyring_file_remove_key", 00:04:56.191 "keyring_file_add_key", 00:04:56.191 
"keyring_linux_set_options", 00:04:56.191 "fsdev_aio_delete", 00:04:56.191 "fsdev_aio_create", 00:04:56.191 "iscsi_get_histogram", 00:04:56.191 "iscsi_enable_histogram", 00:04:56.191 "iscsi_set_options", 00:04:56.191 "iscsi_get_auth_groups", 00:04:56.191 "iscsi_auth_group_remove_secret", 00:04:56.191 "iscsi_auth_group_add_secret", 00:04:56.191 "iscsi_delete_auth_group", 00:04:56.191 "iscsi_create_auth_group", 00:04:56.191 "iscsi_set_discovery_auth", 00:04:56.191 "iscsi_get_options", 00:04:56.191 "iscsi_target_node_request_logout", 00:04:56.191 "iscsi_target_node_set_redirect", 00:04:56.192 "iscsi_target_node_set_auth", 00:04:56.192 "iscsi_target_node_add_lun", 00:04:56.192 "iscsi_get_stats", 00:04:56.192 "iscsi_get_connections", 00:04:56.192 "iscsi_portal_group_set_auth", 00:04:56.192 "iscsi_start_portal_group", 00:04:56.192 "iscsi_delete_portal_group", 00:04:56.192 "iscsi_create_portal_group", 00:04:56.192 "iscsi_get_portal_groups", 00:04:56.192 "iscsi_delete_target_node", 00:04:56.192 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.192 "iscsi_target_node_add_pg_ig_maps", 00:04:56.192 "iscsi_create_target_node", 00:04:56.192 "iscsi_get_target_nodes", 00:04:56.192 "iscsi_delete_initiator_group", 00:04:56.192 "iscsi_initiator_group_remove_initiators", 00:04:56.192 "iscsi_initiator_group_add_initiators", 00:04:56.192 "iscsi_create_initiator_group", 00:04:56.192 "iscsi_get_initiator_groups", 00:04:56.192 "nvmf_set_crdt", 00:04:56.192 "nvmf_set_config", 00:04:56.192 "nvmf_set_max_subsystems", 00:04:56.192 "nvmf_stop_mdns_prr", 00:04:56.192 "nvmf_publish_mdns_prr", 00:04:56.192 "nvmf_subsystem_get_listeners", 00:04:56.192 "nvmf_subsystem_get_qpairs", 00:04:56.192 "nvmf_subsystem_get_controllers", 00:04:56.192 "nvmf_get_stats", 00:04:56.192 "nvmf_get_transports", 00:04:56.192 "nvmf_create_transport", 00:04:56.192 "nvmf_get_targets", 00:04:56.192 "nvmf_delete_target", 00:04:56.192 "nvmf_create_target", 00:04:56.192 "nvmf_subsystem_allow_any_host", 00:04:56.192 
"nvmf_subsystem_set_keys", 00:04:56.192 "nvmf_subsystem_remove_host", 00:04:56.192 "nvmf_subsystem_add_host", 00:04:56.192 "nvmf_ns_remove_host", 00:04:56.192 "nvmf_ns_add_host", 00:04:56.192 "nvmf_subsystem_remove_ns", 00:04:56.192 "nvmf_subsystem_set_ns_ana_group", 00:04:56.192 "nvmf_subsystem_add_ns", 00:04:56.192 "nvmf_subsystem_listener_set_ana_state", 00:04:56.192 "nvmf_discovery_get_referrals", 00:04:56.192 "nvmf_discovery_remove_referral", 00:04:56.192 "nvmf_discovery_add_referral", 00:04:56.192 "nvmf_subsystem_remove_listener", 00:04:56.192 "nvmf_subsystem_add_listener", 00:04:56.192 "nvmf_delete_subsystem", 00:04:56.192 "nvmf_create_subsystem", 00:04:56.192 "nvmf_get_subsystems", 00:04:56.192 "env_dpdk_get_mem_stats", 00:04:56.192 "nbd_get_disks", 00:04:56.192 "nbd_stop_disk", 00:04:56.192 "nbd_start_disk", 00:04:56.192 "ublk_recover_disk", 00:04:56.192 "ublk_get_disks", 00:04:56.192 "ublk_stop_disk", 00:04:56.192 "ublk_start_disk", 00:04:56.192 "ublk_destroy_target", 00:04:56.192 "ublk_create_target", 00:04:56.192 "virtio_blk_create_transport", 00:04:56.192 "virtio_blk_get_transports", 00:04:56.192 "vhost_controller_set_coalescing", 00:04:56.192 "vhost_get_controllers", 00:04:56.192 "vhost_delete_controller", 00:04:56.192 "vhost_create_blk_controller", 00:04:56.192 "vhost_scsi_controller_remove_target", 00:04:56.192 "vhost_scsi_controller_add_target", 00:04:56.192 "vhost_start_scsi_controller", 00:04:56.192 "vhost_create_scsi_controller", 00:04:56.192 "thread_set_cpumask", 00:04:56.192 "scheduler_set_options", 00:04:56.192 "framework_get_governor", 00:04:56.192 "framework_get_scheduler", 00:04:56.192 "framework_set_scheduler", 00:04:56.192 "framework_get_reactors", 00:04:56.192 "thread_get_io_channels", 00:04:56.192 "thread_get_pollers", 00:04:56.192 "thread_get_stats", 00:04:56.192 "framework_monitor_context_switch", 00:04:56.192 "spdk_kill_instance", 00:04:56.192 "log_enable_timestamps", 00:04:56.192 "log_get_flags", 00:04:56.192 "log_clear_flag", 
00:04:56.192 "log_set_flag", 00:04:56.192 "log_get_level", 00:04:56.192 "log_set_level", 00:04:56.192 "log_get_print_level", 00:04:56.192 "log_set_print_level", 00:04:56.192 "framework_enable_cpumask_locks", 00:04:56.192 "framework_disable_cpumask_locks", 00:04:56.192 "framework_wait_init", 00:04:56.192 "framework_start_init", 00:04:56.192 "scsi_get_devices", 00:04:56.192 "bdev_get_histogram", 00:04:56.192 "bdev_enable_histogram", 00:04:56.192 "bdev_set_qos_limit", 00:04:56.192 "bdev_set_qd_sampling_period", 00:04:56.192 "bdev_get_bdevs", 00:04:56.192 "bdev_reset_iostat", 00:04:56.192 "bdev_get_iostat", 00:04:56.192 "bdev_examine", 00:04:56.192 "bdev_wait_for_examine", 00:04:56.192 "bdev_set_options", 00:04:56.192 "accel_get_stats", 00:04:56.192 "accel_set_options", 00:04:56.192 "accel_set_driver", 00:04:56.192 "accel_crypto_key_destroy", 00:04:56.192 "accel_crypto_keys_get", 00:04:56.192 "accel_crypto_key_create", 00:04:56.192 "accel_assign_opc", 00:04:56.192 "accel_get_module_info", 00:04:56.192 "accel_get_opc_assignments", 00:04:56.192 "vmd_rescan", 00:04:56.192 "vmd_remove_device", 00:04:56.192 "vmd_enable", 00:04:56.192 "sock_get_default_impl", 00:04:56.192 "sock_set_default_impl", 00:04:56.192 "sock_impl_set_options", 00:04:56.192 "sock_impl_get_options", 00:04:56.192 "iobuf_get_stats", 00:04:56.192 "iobuf_set_options", 00:04:56.192 "keyring_get_keys", 00:04:56.192 "framework_get_pci_devices", 00:04:56.192 "framework_get_config", 00:04:56.192 "framework_get_subsystems", 00:04:56.192 "fsdev_set_opts", 00:04:56.192 "fsdev_get_opts", 00:04:56.192 "trace_get_info", 00:04:56.192 "trace_get_tpoint_group_mask", 00:04:56.192 "trace_disable_tpoint_group", 00:04:56.192 "trace_enable_tpoint_group", 00:04:56.192 "trace_clear_tpoint_mask", 00:04:56.192 "trace_set_tpoint_mask", 00:04:56.192 "notify_get_notifications", 00:04:56.192 "notify_get_types", 00:04:56.192 "spdk_get_version", 00:04:56.192 "rpc_get_methods" 00:04:56.192 ] 00:04:56.192 19:33:39 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.192 19:33:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.192 19:33:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.451 19:33:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.451 19:33:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59643 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59643 ']' 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59643 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59643 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.451 killing process with pid 59643 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59643' 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59643 00:04:56.451 19:33:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59643 00:04:59.039 00:04:59.039 real 0m4.377s 00:04:59.039 user 0m7.814s 00:04:59.039 sys 0m0.649s 00:04:59.039 19:33:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.039 19:33:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.039 ************************************ 00:04:59.039 END TEST spdkcli_tcp 00:04:59.039 ************************************ 00:04:59.039 19:33:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.039 19:33:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.039 19:33:41 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.039 19:33:41 -- common/autotest_common.sh@10 -- # set +x 00:04:59.039 ************************************ 00:04:59.039 START TEST dpdk_mem_utility 00:04:59.039 ************************************ 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.039 * Looking for test storage... 00:04:59.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:59.039 
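The xtrace entries above step through the `lt`/`cmp_versions` helpers from `scripts/common.sh`, which split two dotted version strings on `.` and compare them component-wise (here deciding whether the installed lcov predates 2.x). A condensed, self-contained sketch of the same split-and-compare idea — this is a simplified illustration, not the SPDK implementation:

```shell
# lt VER1 VER2 -> exit 0 if VER1 is strictly older than VER2.
# Versions are split on '.'; missing components are treated as 0,
# mirroring the read -ra ver1 / ver2 steps in the trace above.
lt() {
  local IFS=.
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 predates 2"   # matches the 'lt 1.15 2' call in the trace
```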
19:33:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.039 19:33:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.039 --rc genhtml_branch_coverage=1 00:04:59.039 --rc genhtml_function_coverage=1 00:04:59.039 --rc genhtml_legend=1 00:04:59.039 --rc geninfo_all_blocks=1 00:04:59.039 --rc geninfo_unexecuted_blocks=1 00:04:59.039 00:04:59.039 ' 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.039 --rc 
genhtml_branch_coverage=1 00:04:59.039 --rc genhtml_function_coverage=1 00:04:59.039 --rc genhtml_legend=1 00:04:59.039 --rc geninfo_all_blocks=1 00:04:59.039 --rc geninfo_unexecuted_blocks=1 00:04:59.039 00:04:59.039 ' 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.039 --rc genhtml_branch_coverage=1 00:04:59.039 --rc genhtml_function_coverage=1 00:04:59.039 --rc genhtml_legend=1 00:04:59.039 --rc geninfo_all_blocks=1 00:04:59.039 --rc geninfo_unexecuted_blocks=1 00:04:59.039 00:04:59.039 ' 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.039 --rc genhtml_branch_coverage=1 00:04:59.039 --rc genhtml_function_coverage=1 00:04:59.039 --rc genhtml_legend=1 00:04:59.039 --rc geninfo_all_blocks=1 00:04:59.039 --rc geninfo_unexecuted_blocks=1 00:04:59.039 00:04:59.039 ' 00:04:59.039 19:33:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.039 19:33:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.039 19:33:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59766 00:04:59.039 19:33:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59766 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59766 ']' 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.039 19:33:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.298 [2024-12-12 19:33:41.993068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:59.298 [2024-12-12 19:33:41.993261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59766 ] 00:04:59.557 [2024-12-12 19:33:42.194007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.557 [2024-12-12 19:33:42.314841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.496 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.496 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:00.496 19:33:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:00.496 19:33:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:00.496 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.496 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.496 { 00:05:00.496 "filename": "/tmp/spdk_mem_dump.txt" 00:05:00.496 } 00:05:00.496 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.496 19:33:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:00.496 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:00.496 1 heaps 
totaling size 824.000000 MiB 00:05:00.496 size: 824.000000 MiB heap id: 0 00:05:00.496 end heaps---------- 00:05:00.496 9 mempools totaling size 603.782043 MiB 00:05:00.496 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:00.496 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:00.496 size: 100.555481 MiB name: bdev_io_59766 00:05:00.496 size: 50.003479 MiB name: msgpool_59766 00:05:00.496 size: 36.509338 MiB name: fsdev_io_59766 00:05:00.496 size: 21.763794 MiB name: PDU_Pool 00:05:00.496 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:00.496 size: 4.133484 MiB name: evtpool_59766 00:05:00.496 size: 0.026123 MiB name: Session_Pool 00:05:00.496 end mempools------- 00:05:00.496 6 memzones totaling size 4.142822 MiB 00:05:00.496 size: 1.000366 MiB name: RG_ring_0_59766 00:05:00.496 size: 1.000366 MiB name: RG_ring_1_59766 00:05:00.496 size: 1.000366 MiB name: RG_ring_4_59766 00:05:00.496 size: 1.000366 MiB name: RG_ring_5_59766 00:05:00.496 size: 0.125366 MiB name: RG_ring_2_59766 00:05:00.496 size: 0.015991 MiB name: RG_ring_3_59766 00:05:00.496 end memzones------- 00:05:00.496 19:33:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:00.757 heap id: 0 total size: 824.000000 MiB number of busy elements: 324 number of free elements: 18 00:05:00.757 list of free elements. 
size: 16.779175 MiB 00:05:00.757 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:00.757 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:00.757 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:00.757 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:00.757 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:00.757 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:00.757 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:00.757 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:00.757 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:00.757 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:00.757 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:00.757 element at address: 0x20001b400000 with size: 0.560730 MiB 00:05:00.757 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:00.757 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:00.757 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:00.757 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:00.757 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:00.757 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:00.757 list of standard malloc elements. 
size: 199.289917 MiB 00:05:00.757 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:00.757 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:00.757 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:00.757 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:00.757 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:00.757 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:00.757 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:00.757 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:00.757 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:00.757 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:00.757 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:00.757 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:00.757 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:00.757 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:00.758 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:00.758 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:00.758 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4908c0 with size: 0.000244 
MiB 00:05:00.758 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:00.758 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4924c0 
with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:00.759 element at 
address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:00.759 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:00.759 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886b380 with size: 0.000244 MiB 
00:05:00.759 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886cf80 with 
size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:00.759 element at address: 
0x20002886eb80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:00.759 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:00.759 list of memzone associated elements. 
size: 607.930908 MiB 00:05:00.759 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:00.759 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:00.759 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:00.759 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:00.759 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:00.759 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59766_0 00:05:00.759 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:00.759 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59766_0 00:05:00.759 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:00.759 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59766_0 00:05:00.759 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:00.759 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:00.759 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:00.759 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:00.759 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:00.759 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59766_0 00:05:00.759 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:00.759 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59766 00:05:00.759 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:00.759 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59766 00:05:00.759 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:00.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:00.760 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:00.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:00.760 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:00.760 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:00.760 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:00.760 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:00.760 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:00.760 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59766 00:05:00.760 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:00.760 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59766 00:05:00.760 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:00.760 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59766 00:05:00.760 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:00.760 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59766 00:05:00.760 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:00.760 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59766 00:05:00.760 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:00.760 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59766 00:05:00.760 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:00.760 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:00.760 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:00.760 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:00.760 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:00.760 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:00.760 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:00.760 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59766 00:05:00.760 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:00.760 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59766 00:05:00.760 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:00.760 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:00.760 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:00.760 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:00.760 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:00.760 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59766 00:05:00.760 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:00.760 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:00.760 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:00.760 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59766 00:05:00.760 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:00.760 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59766 00:05:00.760 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:00.760 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59766 00:05:00.760 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:00.760 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:00.760 19:33:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:00.760 19:33:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59766 00:05:00.760 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59766 ']' 00:05:00.760 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59766 00:05:00.760 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:00.760 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.760 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59766 00:05:00.760 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.760 19:33:43 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.760 killing process with pid 59766 00:05:00.760 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59766' 00:05:00.760 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59766 00:05:00.760 19:33:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59766 00:05:03.296 00:05:03.296 real 0m4.128s 00:05:03.296 user 0m4.040s 00:05:03.296 sys 0m0.584s 00:05:03.296 19:33:45 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.296 19:33:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 ************************************ 00:05:03.296 END TEST dpdk_mem_utility 00:05:03.296 ************************************ 00:05:03.296 19:33:45 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.296 19:33:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.296 19:33:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.296 19:33:45 -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 ************************************ 00:05:03.296 START TEST event 00:05:03.296 ************************************ 00:05:03.296 19:33:45 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.296 * Looking for test storage... 
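The dpdk_mem_utility teardown traced above (`'[' -z 59766 ']'`, `kill -0`, `uname`, `ps --no-headers -o comm=`, the `reactor_0 = sudo` guard, then `kill` and `wait`) follows a common pattern for stopping a daemonized test target. A simplified bash sketch of that killprocess logic, reconstructed from the trace rather than copied from autotest_common.sh (the pid and process name below are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess() steps seen in the trace: guard against an
# empty pid, probe the process with `kill -0`, look up its name via ps,
# refuse to signal a "sudo" wrapper, then kill it and reap it with `wait`.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 0         # already gone: nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")        # e.g. "reactor_0" in the log
    [ "$name" != sudo ] || return 1                # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                # reap; ignore SIGTERM status
}

sleep 5 &          # stand-in for the SPDK app (pid 59766 in the log above)
killprocess $!     # kills the background sleep and reaps it
```

Note that `wait` only reaps children of the calling shell, which is why the real helper runs in the same shell that launched the target process.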
00:05:03.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:03.296 19:33:45 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.296 19:33:45 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.296 19:33:45 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.296 19:33:46 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.296 19:33:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.296 19:33:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.296 19:33:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.296 19:33:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.296 19:33:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.296 19:33:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.296 19:33:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.296 19:33:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.296 19:33:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.296 19:33:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.296 19:33:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.296 19:33:46 event -- scripts/common.sh@344 -- # case "$op" in 00:05:03.296 19:33:46 event -- scripts/common.sh@345 -- # : 1 00:05:03.296 19:33:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.296 19:33:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.296 19:33:46 event -- scripts/common.sh@365 -- # decimal 1 00:05:03.297 19:33:46 event -- scripts/common.sh@353 -- # local d=1 00:05:03.297 19:33:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.297 19:33:46 event -- scripts/common.sh@355 -- # echo 1 00:05:03.297 19:33:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.297 19:33:46 event -- scripts/common.sh@366 -- # decimal 2 00:05:03.297 19:33:46 event -- scripts/common.sh@353 -- # local d=2 00:05:03.297 19:33:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.297 19:33:46 event -- scripts/common.sh@355 -- # echo 2 00:05:03.297 19:33:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.297 19:33:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.297 19:33:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.297 19:33:46 event -- scripts/common.sh@368 -- # return 0 00:05:03.297 19:33:46 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.297 19:33:46 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.297 --rc genhtml_branch_coverage=1 00:05:03.297 --rc genhtml_function_coverage=1 00:05:03.297 --rc genhtml_legend=1 00:05:03.297 --rc geninfo_all_blocks=1 00:05:03.297 --rc geninfo_unexecuted_blocks=1 00:05:03.297 00:05:03.297 ' 00:05:03.297 19:33:46 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.297 --rc genhtml_branch_coverage=1 00:05:03.297 --rc genhtml_function_coverage=1 00:05:03.297 --rc genhtml_legend=1 00:05:03.297 --rc geninfo_all_blocks=1 00:05:03.297 --rc geninfo_unexecuted_blocks=1 00:05:03.297 00:05:03.297 ' 00:05:03.297 19:33:46 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.297 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:03.297 --rc genhtml_branch_coverage=1 00:05:03.297 --rc genhtml_function_coverage=1 00:05:03.297 --rc genhtml_legend=1 00:05:03.297 --rc geninfo_all_blocks=1 00:05:03.297 --rc geninfo_unexecuted_blocks=1 00:05:03.297 00:05:03.297 ' 00:05:03.297 19:33:46 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.297 --rc genhtml_branch_coverage=1 00:05:03.297 --rc genhtml_function_coverage=1 00:05:03.297 --rc genhtml_legend=1 00:05:03.297 --rc geninfo_all_blocks=1 00:05:03.297 --rc geninfo_unexecuted_blocks=1 00:05:03.297 00:05:03.297 ' 00:05:03.297 19:33:46 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:03.297 19:33:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:03.297 19:33:46 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.297 19:33:46 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:03.297 19:33:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.297 19:33:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.297 ************************************ 00:05:03.297 START TEST event_perf 00:05:03.297 ************************************ 00:05:03.297 19:33:46 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.556 Running I/O for 1 seconds...[2024-12-12 19:33:46.148133] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:03.556 [2024-12-12 19:33:46.148238] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59874 ]
00:05:03.556 [2024-12-12 19:33:46.315877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:03.815 [2024-12-12 19:33:46.442722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:03.815 [2024-12-12 19:33:46.442865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:03.815 [2024-12-12 19:33:46.442947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:03.815 [2024-12-12 19:33:46.442980] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:05.194 Running I/O for 1 seconds...
00:05:05.194 lcore 0: 201634
00:05:05.194 lcore 1: 201633
00:05:05.194 lcore 2: 201633
00:05:05.194 lcore 3: 201633
00:05:05.194 done.
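The scripts/common.sh trace earlier in this log (the `lt 1.15 2` check that gates the lcov handling) splits each version string on the IFS characters `.-:` and compares it numerically component by component. A minimal bash sketch of that comparison, reconstructed for illustration only (it covers just the less-than path exercised in the trace; the traced `case "$op"` also handles other operators):

```shell
#!/usr/bin/env bash
# version_lt A B: succeed (return 0) iff version A is strictly lower than B.
# Mirrors the traced logic: split on '.', '-', ':' via IFS, pad the shorter
# version with zeros, and compare each component as a number.
version_lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}            # missing components count as 0
        (( a < b )) && return 0                    # strictly lower component
        (( a > b )) && return 1                    # strictly higher component
    done
    return 1                                       # equal versions: not less-than
}
```

The numeric comparison is the important design point: a lexical compare would wrongly rank `1.9` above `1.15`, which is exactly the case (`lt 1.15 2`) the trace exercises against the installed lcov version.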
00:05:05.194 00:05:05.194 real 0m1.586s 00:05:05.194 user 0m4.352s 00:05:05.194 sys 0m0.113s 00:05:05.194 19:33:47 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.194 19:33:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.194 ************************************ 00:05:05.194 END TEST event_perf 00:05:05.194 ************************************ 00:05:05.194 19:33:47 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.194 19:33:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.194 19:33:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.194 19:33:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.194 ************************************ 00:05:05.194 START TEST event_reactor 00:05:05.194 ************************************ 00:05:05.194 19:33:47 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.194 [2024-12-12 19:33:47.788050] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:05.194 [2024-12-12 19:33:47.788174] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59914 ]
00:05:05.194 [2024-12-12 19:33:47.961943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.454 [2024-12-12 19:33:48.080222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.854 test_start
00:05:06.854 oneshot
00:05:06.854 tick 100
00:05:06.854 tick 100
00:05:06.854 tick 250
00:05:06.854 tick 100
00:05:06.854 tick 100
00:05:06.854 tick 100
00:05:06.854 tick 250
00:05:06.854 tick 500
00:05:06.854 tick 100
00:05:06.854 tick 100
00:05:06.854 tick 250
00:05:06.854 tick 100
00:05:06.854 tick 100
00:05:06.854 test_end
00:05:06.854
00:05:06.854 real 0m1.554s
00:05:06.854 user 0m1.345s
00:05:06.854 sys 0m0.100s
00:05:06.854 19:33:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:06.854 19:33:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:06.854 ************************************
00:05:06.854 END TEST event_reactor
00:05:06.854 ************************************
00:05:06.854 19:33:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:06.854 19:33:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:06.854 19:33:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:06.854 19:33:49 event -- common/autotest_common.sh@10 -- # set +x
00:05:06.854 ************************************
00:05:06.854 START TEST event_reactor_perf
00:05:06.854 ************************************
00:05:06.854 19:33:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:06.854 [2024-12-12 19:33:49.392194] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:06.854 [2024-12-12 19:33:49.392327] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ]
00:05:06.854 [2024-12-12 19:33:49.572466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.854 [2024-12-12 19:33:49.687961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.234 test_start
00:05:08.234 test_end
00:05:08.234 Performance: 395519 events per second
00:05:08.234
00:05:08.234 real 0m1.552s
00:05:08.234 user 0m1.358s
00:05:08.234 sys 0m0.086s
00:05:08.234 19:33:50 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.234 19:33:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:08.234 ************************************
00:05:08.234 END TEST event_reactor_perf
00:05:08.234 ************************************
00:05:08.234 19:33:50 event -- event/event.sh@49 -- # uname -s
00:05:08.234 19:33:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:08.234 19:33:50 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:08.234 19:33:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:08.234 19:33:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.234 19:33:50 event -- common/autotest_common.sh@10 -- # set +x
00:05:08.234 ************************************
00:05:08.234 START TEST event_scheduler
00:05:08.234 ************************************
00:05:08.234 19:33:50 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:08.234 * Looking for test storage...
00:05:08.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:05:08.234 19:33:51 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:08.234 19:33:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:05:08.235 19:33:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:08.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:08.495 19:33:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:08.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.495 --rc genhtml_branch_coverage=1
00:05:08.495 --rc genhtml_function_coverage=1
00:05:08.495 --rc genhtml_legend=1
00:05:08.495 --rc geninfo_all_blocks=1
00:05:08.495 --rc geninfo_unexecuted_blocks=1
00:05:08.495
00:05:08.495 '
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:08.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.495 --rc genhtml_branch_coverage=1
00:05:08.495 --rc genhtml_function_coverage=1
00:05:08.495 --rc genhtml_legend=1
00:05:08.495 --rc geninfo_all_blocks=1
00:05:08.495 --rc geninfo_unexecuted_blocks=1
00:05:08.495
00:05:08.495 '
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:08.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.495 --rc genhtml_branch_coverage=1
00:05:08.495 --rc genhtml_function_coverage=1
00:05:08.495 --rc genhtml_legend=1
00:05:08.495 --rc geninfo_all_blocks=1
00:05:08.495 --rc geninfo_unexecuted_blocks=1
00:05:08.495
00:05:08.495 '
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:08.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.495 --rc genhtml_branch_coverage=1
00:05:08.495 --rc genhtml_function_coverage=1
00:05:08.495 --rc genhtml_legend=1
00:05:08.495 --rc geninfo_all_blocks=1
00:05:08.495 --rc geninfo_unexecuted_blocks=1
00:05:08.495
00:05:08.495 '
00:05:08.495 19:33:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:08.495 19:33:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60026
00:05:08.495 19:33:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:08.495 19:33:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:08.495 19:33:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60026
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60026 ']'
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:08.495 19:33:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:08.495 [2024-12-12 19:33:51.231228] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:08.495 [2024-12-12 19:33:51.231429] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60026 ]
00:05:08.755 [2024-12-12 19:33:51.400619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:08.755 [2024-12-12 19:33:51.523226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.755 [2024-12-12 19:33:51.523559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:08.755 [2024-12-12 19:33:51.523393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:08.755 [2024-12-12 19:33:51.523608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:09.325 19:33:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:09.325 19:33:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:09.325 19:33:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:09.325 19:33:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.325 19:33:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.325 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.326 POWER: Cannot set governor of lcore 0 to userspace
00:05:09.326 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.326 POWER: Cannot set governor of lcore 0 to performance
00:05:09.326 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.326 POWER: Cannot set governor of lcore 0 to userspace
00:05:09.326 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.326 POWER: Cannot set governor of lcore 0 to userspace
00:05:09.326 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:05:09.326 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:05:09.326 POWER: Unable to set Power Management Environment for lcore 0
00:05:09.326 [2024-12-12 19:33:52.096244] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:05:09.326 [2024-12-12 19:33:52.096285] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:05:09.326 [2024-12-12 19:33:52.096315] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:09.326 [2024-12-12 19:33:52.096352] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:09.326 [2024-12-12 19:33:52.096379] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:09.326 [2024-12-12 19:33:52.096406] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:09.326 19:33:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.326 19:33:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:09.326 19:33:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.326 19:33:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.586 [2024-12-12 19:33:52.424658] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:09.586 19:33:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.586 19:33:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:09.586 19:33:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:09.586 19:33:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:09.586 19:33:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.846 ************************************
00:05:09.846 START TEST scheduler_create_thread
00:05:09.846 ************************************
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.846 2
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.846 3
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.846 4
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.846 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.846 5
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.847 6
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.847 7
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.847 8
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.847 9
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.847 10
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.847 19:33:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.226 19:33:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.226 19:33:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:11.226 19:33:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:11.226 19:33:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.226 19:33:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:12.601 19:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:12.601
00:05:12.601 real 0m2.617s
00:05:12.601 user 0m0.024s
00:05:12.601 sys 0m0.005s
00:05:12.601 19:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.601 ************************************
00:05:12.601 END TEST scheduler_create_thread
00:05:12.601 ************************************
00:05:12.601 19:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:12.601 19:33:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:12.601 19:33:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60026
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60026 ']'
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60026
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60026
00:05:12.601 killing process with pid 60026
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60026'
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60026
00:05:12.601 19:33:55 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60026
00:05:12.860 [2024-12-12 19:33:55.532714] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:14.236 ************************************
00:05:14.236 END TEST event_scheduler
00:05:14.236 ************************************
00:05:14.236
00:05:14.236 real 0m5.733s
00:05:14.236 user 0m9.896s
00:05:14.236 sys 0m0.498s
00:05:14.236 19:33:56 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.236 19:33:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:14.236 19:33:56 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:14.236 19:33:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:14.236 19:33:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:14.236 19:33:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:14.236 19:33:56 event -- common/autotest_common.sh@10 -- # set +x
00:05:14.236 ************************************
00:05:14.236 START TEST app_repeat
00:05:14.236 ************************************
00:05:14.236 19:33:56 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:14.236 19:33:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.236 19:33:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.236 19:33:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:14.236 19:33:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:14.236 19:33:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:14.236 19:33:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:14.236 19:33:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:14.236 19:33:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60138
00:05:14.236 19:33:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:14.237 19:33:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:14.237 19:33:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60138'
00:05:14.237 Process app_repeat pid: 60138 spdk_app_start Round 0 19:33:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:14.237 19:33:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:14.237 19:33:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60138 /var/tmp/spdk-nbd.sock
00:05:14.237 19:33:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60138 ']'
00:05:14.237 19:33:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:14.237 19:33:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:14.237 19:33:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:14.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 19:33:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:14.237 19:33:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:14.237 [2024-12-12 19:33:56.846214] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:14.237 [2024-12-12 19:33:56.846406] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60138 ] 00:05:14.237 [2024-12-12 19:33:57.019884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.495 [2024-12-12 19:33:57.139746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.495 [2024-12-12 19:33:57.139780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.063 19:33:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.063 19:33:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:15.063 19:33:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.321 Malloc0 00:05:15.321 19:33:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.613 Malloc1 00:05:15.613 19:33:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.613 19:33:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.613 19:33:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.613 19:33:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.613 19:33:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.613 19:33:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.613 19:33:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.613 19:33:58 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.613 19:33:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.613 19:33:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.614 19:33:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.614 19:33:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.614 19:33:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.614 19:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.614 19:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.614 19:33:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.873 /dev/nbd0 00:05:15.873 19:33:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.873 19:33:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.873 1+0 records in 00:05:15.873 1+0 
records out 00:05:15.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225053 s, 18.2 MB/s 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.873 19:33:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.873 19:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.873 19:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.873 19:33:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.132 /dev/nbd1 00:05:16.132 19:33:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.132 19:33:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.132 1+0 records in 00:05:16.132 1+0 records out 00:05:16.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023038 s, 17.8 MB/s 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.132 19:33:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.132 19:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.132 19:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.132 19:33:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.132 19:33:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.132 19:33:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.391 { 00:05:16.391 "nbd_device": "/dev/nbd0", 00:05:16.391 "bdev_name": "Malloc0" 00:05:16.391 }, 00:05:16.391 { 00:05:16.391 "nbd_device": "/dev/nbd1", 00:05:16.391 "bdev_name": "Malloc1" 00:05:16.391 } 00:05:16.391 ]' 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.391 { 00:05:16.391 "nbd_device": "/dev/nbd0", 00:05:16.391 "bdev_name": "Malloc0" 00:05:16.391 }, 00:05:16.391 { 00:05:16.391 "nbd_device": "/dev/nbd1", 00:05:16.391 "bdev_name": "Malloc1" 00:05:16.391 } 00:05:16.391 ]' 
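The trace above (nbd_common.sh@63-66) shows how the harness turns the `nbd_get_disks` RPC output into a device-name list and a device count. A minimal standalone sketch of that jq/grep step, with the JSON shape from the trace inlined in place of a live RPC response:

```shell
# JSON in the shape nbd_get_disks returned in the trace above.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'

# One device path per line, as nbd_common.sh@64 does with jq.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# Count the attached devices, as nbd_common.sh@65 does with grep -c.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)

echo "$nbd_disks_name"
echo "count=$count"
```

The test script then compares this count against the expected value (`'[' 2 -ne 2 ']'` in the trace) before moving on to the data-verify phase.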
00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.391 /dev/nbd1' 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.391 /dev/nbd1' 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.391 256+0 records in 00:05:16.391 256+0 records out 00:05:16.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140345 s, 74.7 MB/s 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.391 256+0 records in 00:05:16.391 256+0 records out 00:05:16.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217186 s, 48.3 MB/s 00:05:16.391 19:33:59 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.391 256+0 records in 00:05:16.391 256+0 records out 00:05:16.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254916 s, 41.1 MB/s 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.391 19:33:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.650 19:33:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.909 19:33:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.165 19:33:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.165 19:33:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.733 19:34:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.668 [2024-12-12 19:34:01.457606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.927 [2024-12-12 19:34:01.573922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.927 [2024-12-12 19:34:01.573922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.185 
[2024-12-12 19:34:01.771127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.185 [2024-12-12 19:34:01.771198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.562 spdk_app_start Round 1 00:05:20.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.562 19:34:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.562 19:34:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:20.562 19:34:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60138 /var/tmp/spdk-nbd.sock 00:05:20.562 19:34:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60138 ']' 00:05:20.562 19:34:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.562 19:34:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.562 19:34:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
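The `waitforlisten 60138 /var/tmp/spdk-nbd.sock` call traced above polls until the restarted app instance is serving its RPC socket (`max_retries=100` in the trace). The real helper probes the SPDK RPC endpoint; the sketch below is only a generic retry loop in that style, using a hypothetical existence check as the readiness probe:

```shell
# Generic retry loop in the style of waitforlisten: repeat a readiness
# check up to max_retries times. The probe here (path exists) is a
# stand-in; the real helper checks the process and its RPC socket.
wait_for() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.01
    done
    return 1
}

probe=$(mktemp)                       # stand-in readiness target
wait_for "$probe" 100 && result=listening || result=timeout
rm -f "$probe"
```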
00:05:20.562 19:34:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.562 19:34:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.820 19:34:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.820 19:34:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.820 19:34:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.078 Malloc0 00:05:21.079 19:34:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.337 Malloc1 00:05:21.337 19:34:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.337 19:34:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.337 19:34:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.337 19:34:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.337 19:34:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.337 19:34:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.337 19:34:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.337 19:34:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.337 19:34:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.337 19:34:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.338 19:34:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.338 19:34:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.338 19:34:04 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.338 19:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.338 19:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.338 19:34:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.596 /dev/nbd0 00:05:21.597 19:34:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.597 19:34:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.597 1+0 records in 00:05:21.597 1+0 records out 00:05:21.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479688 s, 8.5 MB/s 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.597 19:34:04 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.597 19:34:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.597 19:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.597 19:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.597 19:34:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.856 /dev/nbd1 00:05:21.856 19:34:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.856 19:34:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.856 1+0 records in 00:05:21.856 1+0 records out 00:05:21.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026524 s, 15.4 MB/s 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.856 19:34:04 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.856 19:34:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.856 19:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.856 19:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.856 19:34:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.856 19:34:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.856 19:34:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.114 { 00:05:22.114 "nbd_device": "/dev/nbd0", 00:05:22.114 "bdev_name": "Malloc0" 00:05:22.114 }, 00:05:22.114 { 00:05:22.114 "nbd_device": "/dev/nbd1", 00:05:22.114 "bdev_name": "Malloc1" 00:05:22.114 } 00:05:22.114 ]' 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.114 { 00:05:22.114 "nbd_device": "/dev/nbd0", 00:05:22.114 "bdev_name": "Malloc0" 00:05:22.114 }, 00:05:22.114 { 00:05:22.114 "nbd_device": "/dev/nbd1", 00:05:22.114 "bdev_name": "Malloc1" 00:05:22.114 } 00:05:22.114 ]' 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.114 /dev/nbd1' 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.114 /dev/nbd1' 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.114 
19:34:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.114 256+0 records in 00:05:22.114 256+0 records out 00:05:22.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126314 s, 83.0 MB/s 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.114 256+0 records in 00:05:22.114 256+0 records out 00:05:22.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0346051 s, 30.3 MB/s 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.114 256+0 records in 00:05:22.114 256+0 records out 00:05:22.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300539 s, 34.9 MB/s 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
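The `nbd_dd_data_verify` pass traced above and continued below follows a simple pattern: fill a temp file (`nbdrandtest`) with 1 MiB of random data, `dd` it onto each device, then `cmp` each device back against the file. A self-contained sketch of that pattern, using ordinary temp files as hypothetical stand-ins for the `/dev/nbdN` targets (the real run adds `oflag=direct`, which requires a block device):

```shell
set -e
tmp_file=$(mktemp)   # plays the role of test/event/nbdrandtest
dev0=$(mktemp)       # stand-in for /dev/nbd0
dev1=$(mktemp)       # stand-in for /dev/nbd1

# Write phase: 256 x 4 KiB = 1 MiB of random data, copied to each target.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1M of each target to the source;
# cmp exits non-zero (failing the script under set -e) on any mismatch.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

size=$(stat -c %s "$tmp_file")
rm -f "$tmp_file" "$dev0" "$dev1"
```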
00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.114 19:34:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.115 19:34:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.115 19:34:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.115 19:34:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.115 19:34:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.115 19:34:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.115 19:34:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.115 19:34:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.115 19:34:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.383 19:34:05 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.383 19:34:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.383 19:34:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.383 19:34:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.383 19:34:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.383 19:34:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.383 19:34:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.383 19:34:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.383 19:34:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.383 19:34:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.639 19:34:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.897 19:34:05 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.897 19:34:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.897 19:34:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.155 19:34:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.532 [2024-12-12 19:34:07.107382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.532 [2024-12-12 19:34:07.219305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.532 [2024-12-12 19:34:07.219329] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.790 [2024-12-12 19:34:07.408812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.790 [2024-12-12 19:34:07.408884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
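After both `nbd_stop_disk` calls, the count check traced above runs against an empty disk list, and the bare `true` at nbd_common.sh@65 matters: `grep -c` prints `0` but exits non-zero when it finds no matches, so the guard keeps the zero-match case from tripping the script. A minimal sketch of that guard:

```shell
# Count /dev/nbd entries in a (possibly empty) device-name list.
# grep -c prints the count but exits 1 on zero matches; the trailing
# true absorbs that, the same role the "true" at nbd_common.sh@65
# plays in the trace above.
count_nbd() {
    echo "$1" | grep -c /dev/nbd || true
}

before_stop=$(count_nbd $'/dev/nbd0\n/dev/nbd1')
after_stop=$(count_nbd '')
```

With both devices detached the count is 0, `'[' 0 -ne 0 ']'` fails, and `nbd_get_count` returns 0, which is what lets the round proceed to `spdk_kill_instance SIGTERM`.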
00:05:26.167 spdk_app_start Round 2 00:05:26.167 19:34:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.167 19:34:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:26.167 19:34:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60138 /var/tmp/spdk-nbd.sock 00:05:26.167 19:34:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60138 ']' 00:05:26.167 19:34:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.167 19:34:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.167 19:34:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.167 19:34:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.167 19:34:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.433 19:34:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.433 19:34:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.433 19:34:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.709 Malloc0 00:05:26.709 19:34:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.967 Malloc1 00:05:26.967 19:34:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.967 
19:34:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.967 19:34:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.226 /dev/nbd0 00:05:27.226 19:34:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.226 19:34:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.226 19:34:09 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.226 1+0 records in 00:05:27.226 1+0 records out 00:05:27.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539529 s, 7.6 MB/s 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.226 19:34:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.226 19:34:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.226 19:34:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.226 19:34:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.485 /dev/nbd1 00:05:27.485 19:34:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.485 19:34:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.485 19:34:10 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.485 1+0 records in 00:05:27.485 1+0 records out 00:05:27.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004719 s, 8.7 MB/s 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.485 19:34:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.485 19:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.485 19:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.485 19:34:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.485 19:34:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.485 19:34:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.744 { 00:05:27.744 "nbd_device": "/dev/nbd0", 00:05:27.744 "bdev_name": "Malloc0" 00:05:27.744 }, 00:05:27.744 { 00:05:27.744 "nbd_device": "/dev/nbd1", 00:05:27.744 "bdev_name": 
"Malloc1" 00:05:27.744 } 00:05:27.744 ]' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.744 { 00:05:27.744 "nbd_device": "/dev/nbd0", 00:05:27.744 "bdev_name": "Malloc0" 00:05:27.744 }, 00:05:27.744 { 00:05:27.744 "nbd_device": "/dev/nbd1", 00:05:27.744 "bdev_name": "Malloc1" 00:05:27.744 } 00:05:27.744 ]' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.744 /dev/nbd1' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.744 /dev/nbd1' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.744 256+0 records in 00:05:27.744 256+0 records out 00:05:27.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489048 s, 214 MB/s 
00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.744 256+0 records in 00:05:27.744 256+0 records out 00:05:27.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242085 s, 43.3 MB/s 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.744 256+0 records in 00:05:27.744 256+0 records out 00:05:27.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260323 s, 40.3 MB/s 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.744 19:34:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.003 19:34:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.261 19:34:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.519 19:34:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.519 19:34:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.085 19:34:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.021 [2024-12-12 19:34:12.812891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.279 [2024-12-12 19:34:12.924120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.279 [2024-12-12 19:34:12.924126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.279 [2024-12-12 19:34:13.115074] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.279 [2024-12-12 19:34:13.115168] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.180 19:34:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60138 /var/tmp/spdk-nbd.sock 00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60138 ']' 00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.180 19:34:14 event.app_repeat -- event/event.sh@39 -- # killprocess 60138 00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60138 ']' 00:05:32.180 19:34:14 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60138 00:05:32.181 19:34:14 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:32.181 19:34:14 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.181 19:34:14 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60138 00:05:32.181 19:34:14 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.181 19:34:14 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.181 killing process with pid 60138 00:05:32.181 19:34:14 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60138' 00:05:32.181 19:34:14 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60138 00:05:32.181 19:34:14 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60138 00:05:33.115 spdk_app_start is called in Round 0. 00:05:33.115 Shutdown signal received, stop current app iteration 00:05:33.115 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:33.115 spdk_app_start is called in Round 1. 00:05:33.115 Shutdown signal received, stop current app iteration 00:05:33.115 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:33.115 spdk_app_start is called in Round 2. 
00:05:33.115 Shutdown signal received, stop current app iteration 00:05:33.115 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:33.115 spdk_app_start is called in Round 3. 00:05:33.115 Shutdown signal received, stop current app iteration 00:05:33.375 19:34:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:33.375 19:34:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:33.375 00:05:33.375 real 0m19.190s 00:05:33.375 user 0m41.065s 00:05:33.375 sys 0m2.744s 00:05:33.375 19:34:15 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.375 19:34:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.375 ************************************ 00:05:33.375 END TEST app_repeat 00:05:33.375 ************************************ 00:05:33.375 19:34:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:33.375 19:34:16 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.375 19:34:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.375 19:34:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.375 19:34:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.375 ************************************ 00:05:33.375 START TEST cpu_locks 00:05:33.375 ************************************ 00:05:33.375 19:34:16 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.375 * Looking for test storage... 
00:05:33.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:33.375 19:34:16 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.375 19:34:16 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.375 19:34:16 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.635 19:34:16 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.635 19:34:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.636 19:34:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:33.636 19:34:16 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.636 19:34:16 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.636 --rc genhtml_branch_coverage=1 00:05:33.636 --rc genhtml_function_coverage=1 00:05:33.636 --rc genhtml_legend=1 00:05:33.636 --rc geninfo_all_blocks=1 00:05:33.636 --rc geninfo_unexecuted_blocks=1 00:05:33.636 00:05:33.636 ' 00:05:33.636 19:34:16 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.636 --rc genhtml_branch_coverage=1 00:05:33.636 --rc genhtml_function_coverage=1 00:05:33.636 --rc genhtml_legend=1 00:05:33.636 --rc geninfo_all_blocks=1 00:05:33.636 --rc geninfo_unexecuted_blocks=1 
00:05:33.636 00:05:33.636 ' 00:05:33.636 19:34:16 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.636 --rc genhtml_branch_coverage=1 00:05:33.636 --rc genhtml_function_coverage=1 00:05:33.636 --rc genhtml_legend=1 00:05:33.636 --rc geninfo_all_blocks=1 00:05:33.636 --rc geninfo_unexecuted_blocks=1 00:05:33.636 00:05:33.636 ' 00:05:33.636 19:34:16 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.636 --rc genhtml_branch_coverage=1 00:05:33.636 --rc genhtml_function_coverage=1 00:05:33.636 --rc genhtml_legend=1 00:05:33.636 --rc geninfo_all_blocks=1 00:05:33.636 --rc geninfo_unexecuted_blocks=1 00:05:33.636 00:05:33.636 ' 00:05:33.636 19:34:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:33.636 19:34:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:33.636 19:34:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:33.636 19:34:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:33.636 19:34:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.636 19:34:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.636 19:34:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.636 ************************************ 00:05:33.636 START TEST default_locks 00:05:33.636 ************************************ 00:05:33.636 19:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:33.636 19:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60580 00:05:33.636 19:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.636 
19:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60580 00:05:33.636 19:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60580 ']' 00:05:33.636 19:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.636 19:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.636 19:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.636 19:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.636 19:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.636 [2024-12-12 19:34:16.360230] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:33.636 [2024-12-12 19:34:16.360368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60580 ] 00:05:33.895 [2024-12-12 19:34:16.535381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.895 [2024-12-12 19:34:16.648652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.830 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.830 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:34.830 19:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60580 00:05:34.830 19:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.830 19:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60580 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60580 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60580 ']' 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60580 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60580 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.089 killing process with pid 60580 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60580' 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60580 00:05:35.089 19:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60580 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60580 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60580 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60580 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60580 ']' 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.625 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60580) - No such process 00:05:37.625 ERROR: process (pid: 60580) is no longer running 00:05:37.625 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.626 00:05:37.626 real 0m3.886s 00:05:37.626 user 0m3.811s 00:05:37.626 sys 0m0.581s 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.626 19:34:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.626 ************************************ 00:05:37.626 END TEST default_locks 00:05:37.626 ************************************ 00:05:37.626 19:34:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:37.626 19:34:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:37.626 19:34:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.626 19:34:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.626 ************************************ 00:05:37.626 START TEST default_locks_via_rpc 00:05:37.626 ************************************ 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60650 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60650 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60650 ']' 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.626 19:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.626 [2024-12-12 19:34:20.312626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:37.626 [2024-12-12 19:34:20.312794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60650 ] 00:05:37.885 [2024-12-12 19:34:20.487854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.885 [2024-12-12 19:34:20.605559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.822 19:34:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60650 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60650 00:05:38.822 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60650 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60650 ']' 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60650 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60650 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.389 killing process with pid 60650 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60650' 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60650 00:05:39.389 19:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60650 00:05:41.922 00:05:41.922 real 0m4.104s 00:05:41.922 user 0m4.071s 00:05:41.922 sys 0m0.673s 00:05:41.922 19:34:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.922 19:34:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.922 ************************************ 00:05:41.922 END TEST default_locks_via_rpc 00:05:41.922 ************************************ 00:05:41.922 19:34:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:41.922 19:34:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.922 19:34:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.922 19:34:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.922 ************************************ 00:05:41.922 START TEST non_locking_app_on_locked_coremask 00:05:41.922 ************************************ 00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60729 00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60729 /var/tmp/spdk.sock 00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60729 ']' 00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.922 19:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.922 [2024-12-12 19:34:24.476802] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:41.922 [2024-12-12 19:34:24.476942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60729 ] 00:05:41.922 [2024-12-12 19:34:24.651219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.180 [2024-12-12 19:34:24.765318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60745 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60745 /var/tmp/spdk2.sock 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60745 ']' 00:05:43.116 19:34:25 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.116 19:34:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.116 [2024-12-12 19:34:25.759412] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:43.116 [2024-12-12 19:34:25.759560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60745 ] 00:05:43.116 [2024-12-12 19:34:25.930445] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:43.116 [2024-12-12 19:34:25.930517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.374 [2024-12-12 19:34:26.180744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60729 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60729 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60729 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60729 ']' 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60729 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.952 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60729 00:05:46.210 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.210 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.210 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
60729' 00:05:46.210 killing process with pid 60729 00:05:46.210 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60729 00:05:46.210 19:34:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60729 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60745 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60745 ']' 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60745 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60745 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.473 killing process with pid 60745 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60745' 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60745 00:05:51.473 19:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60745 00:05:53.375 00:05:53.375 real 0m11.485s 00:05:53.375 user 0m11.719s 00:05:53.375 sys 0m1.238s 00:05:53.375 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:53.375 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.375 ************************************ 00:05:53.375 END TEST non_locking_app_on_locked_coremask 00:05:53.375 ************************************ 00:05:53.375 19:34:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.375 19:34:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.375 19:34:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.375 19:34:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.375 ************************************ 00:05:53.375 START TEST locking_app_on_unlocked_coremask 00:05:53.375 ************************************ 00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60895 00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60895 /var/tmp/spdk.sock 00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60895 ']' 00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.375 19:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.375 [2024-12-12 19:34:36.034952] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:53.375 [2024-12-12 19:34:36.035079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60895 ] 00:05:53.375 [2024-12-12 19:34:36.192357] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:53.375 [2024-12-12 19:34:36.192407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.634 [2024-12-12 19:34:36.306508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60911 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60911 /var/tmp/spdk2.sock 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60911 ']' 
00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.569 19:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.569 [2024-12-12 19:34:37.262122] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:54.569 [2024-12-12 19:34:37.262276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60911 ] 00:05:54.827 [2024-12-12 19:34:37.435683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.827 [2024-12-12 19:34:37.667267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.361 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.361 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.361 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60911 00:05:57.361 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60911 00:05:57.361 19:34:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60895 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60895 ']' 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60895 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60895 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.361 killing process with pid 60895 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60895' 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60895 00:05:57.361 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60895 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60911 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60911 ']' 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60911 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60911 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.663 killing process with pid 60911 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60911' 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60911 00:06:02.663 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60911 00:06:04.565 00:06:04.565 real 0m11.103s 00:06:04.565 user 0m11.316s 00:06:04.565 sys 0m1.102s 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.565 ************************************ 00:06:04.565 END TEST locking_app_on_unlocked_coremask 00:06:04.565 ************************************ 00:06:04.565 19:34:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.565 19:34:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.565 19:34:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.565 19:34:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.565 ************************************ 00:06:04.565 START TEST 
locking_app_on_locked_coremask 00:06:04.565 ************************************ 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61057 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61057 /var/tmp/spdk.sock 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61057 ']' 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.565 19:34:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.565 [2024-12-12 19:34:47.193282] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:04.565 [2024-12-12 19:34:47.193908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61057 ] 00:06:04.565 [2024-12-12 19:34:47.367649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.823 [2024-12-12 19:34:47.482001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61078 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61078 /var/tmp/spdk2.sock 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61078 /var/tmp/spdk2.sock 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61078 /var/tmp/spdk2.sock 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61078 ']' 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.760 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.760 [2024-12-12 19:34:48.447368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:05.760 [2024-12-12 19:34:48.447845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61078 ] 00:06:06.018 [2024-12-12 19:34:48.616837] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61057 has claimed it. 00:06:06.018 [2024-12-12 19:34:48.616909] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:06.276 ERROR: process (pid: 61078) is no longer running 00:06:06.276 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61078) - No such process 00:06:06.276 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.276 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:06.276 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:06.276 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.276 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.276 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.276 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61057 00:06:06.276 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61057 00:06:06.276 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61057 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61057 ']' 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61057 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61057 00:06:06.842 
19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.842 killing process with pid 61057 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61057' 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61057 00:06:06.842 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61057 00:06:09.374 00:06:09.374 real 0m4.721s 00:06:09.374 user 0m4.899s 00:06:09.374 sys 0m0.790s 00:06:09.374 19:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.374 19:34:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.374 ************************************ 00:06:09.374 END TEST locking_app_on_locked_coremask 00:06:09.374 ************************************ 00:06:09.374 19:34:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:09.374 19:34:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.374 19:34:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.374 19:34:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.374 ************************************ 00:06:09.374 START TEST locking_overlapped_coremask 00:06:09.374 ************************************ 00:06:09.374 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:09.374 19:34:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61142 00:06:09.374 19:34:51 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:09.374 19:34:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61142 /var/tmp/spdk.sock 00:06:09.374 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61142 ']' 00:06:09.374 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.374 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.374 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.374 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.374 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.374 [2024-12-12 19:34:51.982537] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:09.374 [2024-12-12 19:34:51.982664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61142 ] 00:06:09.374 [2024-12-12 19:34:52.157884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.633 [2024-12-12 19:34:52.279019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.633 [2024-12-12 19:34:52.279162] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.633 [2024-12-12 19:34:52.279213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61160 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61160 /var/tmp/spdk2.sock 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61160 /var/tmp/spdk2.sock 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61160 /var/tmp/spdk2.sock 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61160 ']' 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.583 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.583 [2024-12-12 19:34:53.257370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:10.583 [2024-12-12 19:34:53.257937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61160 ] 00:06:10.841 [2024-12-12 19:34:53.426563] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61142 has claimed it. 00:06:10.841 [2024-12-12 19:34:53.430664] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:11.100 ERROR: process (pid: 61160) is no longer running 00:06:11.100 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61160) - No such process 00:06:11.100 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.100 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:11.100 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:11.100 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.100 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.100 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.100 19:34:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:11.100 19:34:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.100 19:34:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61142 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61142 ']' 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61142 00:06:11.101 19:34:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61142 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61142' 00:06:11.101 killing process with pid 61142 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61142 00:06:11.101 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61142 00:06:13.640 00:06:13.640 real 0m4.468s 00:06:13.640 user 0m12.182s 00:06:13.640 sys 0m0.575s 00:06:13.640 19:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.640 19:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.640 ************************************ 00:06:13.640 END TEST locking_overlapped_coremask 00:06:13.640 ************************************ 00:06:13.640 19:34:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:13.640 19:34:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.640 19:34:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.640 19:34:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.640 ************************************ 00:06:13.640 START TEST 
locking_overlapped_coremask_via_rpc 00:06:13.640 ************************************ 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61230 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61230 /var/tmp/spdk.sock 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61230 ']' 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.641 19:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.899 [2024-12-12 19:34:56.513282] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:13.899 [2024-12-12 19:34:56.513424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61230 ] 00:06:13.899 [2024-12-12 19:34:56.687852] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:13.899 [2024-12-12 19:34:56.687912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.158 [2024-12-12 19:34:56.811326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.158 [2024-12-12 19:34:56.811469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.158 [2024-12-12 19:34:56.811518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.094 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.094 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.095 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:15.095 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61248 00:06:15.095 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61248 /var/tmp/spdk2.sock 00:06:15.095 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61248 ']' 00:06:15.095 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.095 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.095 19:34:57 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.095 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.095 19:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.095 [2024-12-12 19:34:57.751061] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:15.095 [2024-12-12 19:34:57.751583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61248 ] 00:06:15.095 [2024-12-12 19:34:57.927677] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.095 [2024-12-12 19:34:57.927757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.353 [2024-12-12 19:34:58.176969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.353 [2024-12-12 19:34:58.177140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.353 [2024-12-12 19:34:58.177180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.884 19:35:00 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.884 [2024-12-12 19:35:00.347830] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61230 has claimed it. 00:06:17.884 request: 00:06:17.884 { 00:06:17.884 "method": "framework_enable_cpumask_locks", 00:06:17.884 "req_id": 1 00:06:17.884 } 00:06:17.884 Got JSON-RPC error response 00:06:17.884 response: 00:06:17.884 { 00:06:17.884 "code": -32603, 00:06:17.884 "message": "Failed to claim CPU core: 2" 00:06:17.884 } 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61230 /var/tmp/spdk.sock 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 61230 ']' 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61248 /var/tmp/spdk2.sock 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61248 ']' 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.884 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.143 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.143 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:18.143 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.143 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.143 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.143 00:06:18.143 real 0m4.380s 00:06:18.143 user 0m1.277s 00:06:18.143 sys 0m0.214s 00:06:18.143 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.143 19:35:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 ************************************ 00:06:18.143 END TEST locking_overlapped_coremask_via_rpc 00:06:18.143 ************************************ 00:06:18.143 19:35:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:18.143 19:35:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61230 ]] 00:06:18.143 19:35:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 61230 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61230 ']' 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61230 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61230 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.143 killing process with pid 61230 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61230' 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61230 00:06:18.143 19:35:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61230 00:06:20.676 19:35:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61248 ]] 00:06:20.676 19:35:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61248 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61248 ']' 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61248 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61248 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61248' 00:06:20.676 killing 
process with pid 61248 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61248 00:06:20.676 19:35:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61248 00:06:23.216 19:35:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.216 19:35:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:23.216 19:35:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61230 ]] 00:06:23.216 19:35:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61230 00:06:23.216 19:35:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61230 ']' 00:06:23.216 19:35:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61230 00:06:23.216 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61230) - No such process 00:06:23.216 19:35:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61230 is not found' 00:06:23.216 Process with pid 61230 is not found 00:06:23.216 19:35:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61248 ]] 00:06:23.216 19:35:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61248 00:06:23.216 19:35:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61248 ']' 00:06:23.216 19:35:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61248 00:06:23.216 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61248) - No such process 00:06:23.216 Process with pid 61248 is not found 00:06:23.216 19:35:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61248 is not found' 00:06:23.216 19:35:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.216 00:06:23.216 real 0m49.877s 00:06:23.216 user 1m26.065s 00:06:23.216 sys 0m6.383s 00:06:23.216 19:35:05 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.216 19:35:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.216 
************************************ 00:06:23.216 END TEST cpu_locks 00:06:23.216 ************************************ 00:06:23.216 00:06:23.216 real 1m20.106s 00:06:23.216 user 2m24.328s 00:06:23.216 sys 0m10.321s 00:06:23.216 19:35:05 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.216 19:35:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.216 ************************************ 00:06:23.216 END TEST event 00:06:23.216 ************************************ 00:06:23.216 19:35:06 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.216 19:35:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.216 19:35:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.216 19:35:06 -- common/autotest_common.sh@10 -- # set +x 00:06:23.216 ************************************ 00:06:23.216 START TEST thread 00:06:23.216 ************************************ 00:06:23.216 19:35:06 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.476 * Looking for test storage... 
00:06:23.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:23.476 19:35:06 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.476 19:35:06 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.476 19:35:06 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.476 19:35:06 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.476 19:35:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.476 19:35:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.476 19:35:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.476 19:35:06 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.476 19:35:06 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.476 19:35:06 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.476 19:35:06 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.476 19:35:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.476 19:35:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.476 19:35:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.476 19:35:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.476 19:35:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:23.476 19:35:06 thread -- scripts/common.sh@345 -- # : 1 00:06:23.476 19:35:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.476 19:35:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.476 19:35:06 thread -- scripts/common.sh@365 -- # decimal 1 00:06:23.476 19:35:06 thread -- scripts/common.sh@353 -- # local d=1 00:06:23.476 19:35:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.476 19:35:06 thread -- scripts/common.sh@355 -- # echo 1 00:06:23.476 19:35:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.476 19:35:06 thread -- scripts/common.sh@366 -- # decimal 2 00:06:23.476 19:35:06 thread -- scripts/common.sh@353 -- # local d=2 00:06:23.476 19:35:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.477 19:35:06 thread -- scripts/common.sh@355 -- # echo 2 00:06:23.477 19:35:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.477 19:35:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.477 19:35:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.477 19:35:06 thread -- scripts/common.sh@368 -- # return 0 00:06:23.477 19:35:06 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.477 19:35:06 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.477 --rc genhtml_branch_coverage=1 00:06:23.477 --rc genhtml_function_coverage=1 00:06:23.477 --rc genhtml_legend=1 00:06:23.477 --rc geninfo_all_blocks=1 00:06:23.477 --rc geninfo_unexecuted_blocks=1 00:06:23.477 00:06:23.477 ' 00:06:23.477 19:35:06 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.477 --rc genhtml_branch_coverage=1 00:06:23.477 --rc genhtml_function_coverage=1 00:06:23.477 --rc genhtml_legend=1 00:06:23.477 --rc geninfo_all_blocks=1 00:06:23.477 --rc geninfo_unexecuted_blocks=1 00:06:23.477 00:06:23.477 ' 00:06:23.477 19:35:06 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.477 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.477 --rc genhtml_branch_coverage=1 00:06:23.477 --rc genhtml_function_coverage=1 00:06:23.477 --rc genhtml_legend=1 00:06:23.477 --rc geninfo_all_blocks=1 00:06:23.477 --rc geninfo_unexecuted_blocks=1 00:06:23.477 00:06:23.477 ' 00:06:23.477 19:35:06 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.477 --rc genhtml_branch_coverage=1 00:06:23.477 --rc genhtml_function_coverage=1 00:06:23.477 --rc genhtml_legend=1 00:06:23.477 --rc geninfo_all_blocks=1 00:06:23.477 --rc geninfo_unexecuted_blocks=1 00:06:23.477 00:06:23.477 ' 00:06:23.477 19:35:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.477 19:35:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:23.477 19:35:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.477 19:35:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.477 ************************************ 00:06:23.477 START TEST thread_poller_perf 00:06:23.477 ************************************ 00:06:23.477 19:35:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.477 [2024-12-12 19:35:06.297430] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:23.477 [2024-12-12 19:35:06.297611] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:06:23.740 [2024-12-12 19:35:06.469833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.019 [2024-12-12 19:35:06.585844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.019 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:24.964 [2024-12-12T19:35:07.809Z] ====================================== 00:06:24.964 [2024-12-12T19:35:07.809Z] busy:2301213746 (cyc) 00:06:24.964 [2024-12-12T19:35:07.809Z] total_run_count: 393000 00:06:24.964 [2024-12-12T19:35:07.809Z] tsc_hz: 2290000000 (cyc) 00:06:24.964 [2024-12-12T19:35:07.809Z] ====================================== 00:06:24.964 [2024-12-12T19:35:07.809Z] poller_cost: 5855 (cyc), 2556 (nsec) 00:06:25.226 00:06:25.226 real 0m1.567s 00:06:25.226 user 0m1.368s 00:06:25.226 sys 0m0.092s 00:06:25.226 19:35:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.226 19:35:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.226 ************************************ 00:06:25.226 END TEST thread_poller_perf 00:06:25.226 ************************************ 00:06:25.226 19:35:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.226 19:35:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:25.226 19:35:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.226 19:35:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.226 ************************************ 00:06:25.226 START TEST thread_poller_perf 00:06:25.226 
************************************ 00:06:25.226 19:35:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.226 [2024-12-12 19:35:07.932411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:25.226 [2024-12-12 19:35:07.932596] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61485 ] 00:06:25.485 [2024-12-12 19:35:08.107278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.485 [2024-12-12 19:35:08.225520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.485 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:26.863 [2024-12-12T19:35:09.708Z] ====================================== 00:06:26.863 [2024-12-12T19:35:09.708Z] busy:2293443542 (cyc) 00:06:26.863 [2024-12-12T19:35:09.708Z] total_run_count: 4701000 00:06:26.863 [2024-12-12T19:35:09.708Z] tsc_hz: 2290000000 (cyc) 00:06:26.863 [2024-12-12T19:35:09.708Z] ====================================== 00:06:26.863 [2024-12-12T19:35:09.708Z] poller_cost: 487 (cyc), 212 (nsec) 00:06:26.863 00:06:26.863 real 0m1.564s 00:06:26.863 user 0m1.365s 00:06:26.863 sys 0m0.091s 00:06:26.863 19:35:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.863 19:35:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.863 ************************************ 00:06:26.863 END TEST thread_poller_perf 00:06:26.863 ************************************ 00:06:26.863 19:35:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.863 00:06:26.863 real 0m3.476s 00:06:26.863 user 0m2.899s 00:06:26.863 sys 0m0.377s 00:06:26.863 19:35:09 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.863 ************************************ 00:06:26.863 END TEST thread 00:06:26.863 ************************************ 00:06:26.863 19:35:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.863 19:35:09 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.863 19:35:09 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.863 19:35:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.863 19:35:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.863 19:35:09 -- common/autotest_common.sh@10 -- # set +x 00:06:26.863 ************************************ 00:06:26.863 START TEST app_cmdline 00:06:26.863 ************************************ 00:06:26.863 19:35:09 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.863 * Looking for test storage... 00:06:26.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.863 19:35:09 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.863 19:35:09 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.863 19:35:09 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.123 19:35:09 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.123 --rc genhtml_branch_coverage=1 00:06:27.123 --rc genhtml_function_coverage=1 00:06:27.123 --rc 
genhtml_legend=1 00:06:27.123 --rc geninfo_all_blocks=1 00:06:27.123 --rc geninfo_unexecuted_blocks=1 00:06:27.123 00:06:27.123 ' 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.123 --rc genhtml_branch_coverage=1 00:06:27.123 --rc genhtml_function_coverage=1 00:06:27.123 --rc genhtml_legend=1 00:06:27.123 --rc geninfo_all_blocks=1 00:06:27.123 --rc geninfo_unexecuted_blocks=1 00:06:27.123 00:06:27.123 ' 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.123 --rc genhtml_branch_coverage=1 00:06:27.123 --rc genhtml_function_coverage=1 00:06:27.123 --rc genhtml_legend=1 00:06:27.123 --rc geninfo_all_blocks=1 00:06:27.123 --rc geninfo_unexecuted_blocks=1 00:06:27.123 00:06:27.123 ' 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.123 --rc genhtml_branch_coverage=1 00:06:27.123 --rc genhtml_function_coverage=1 00:06:27.123 --rc genhtml_legend=1 00:06:27.123 --rc geninfo_all_blocks=1 00:06:27.123 --rc geninfo_unexecuted_blocks=1 00:06:27.123 00:06:27.123 ' 00:06:27.123 19:35:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.123 19:35:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61574 00:06:27.123 19:35:09 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.123 19:35:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61574 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61574 ']' 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:27.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.123 19:35:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.123 [2024-12-12 19:35:09.883172] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:27.123 [2024-12-12 19:35:09.883295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61574 ] 00:06:27.383 [2024-12-12 19:35:10.058030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.383 [2024-12-12 19:35:10.171376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.319 19:35:11 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.319 19:35:11 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:28.319 19:35:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:28.578 { 00:06:28.578 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:28.578 "fields": { 00:06:28.578 "major": 25, 00:06:28.578 "minor": 1, 00:06:28.578 "patch": 0, 00:06:28.578 "suffix": "-pre", 00:06:28.578 "commit": "e01cb43b8" 00:06:28.579 } 00:06:28.579 } 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.579 19:35:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:28.579 19:35:11 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.839 request: 00:06:28.839 { 00:06:28.839 "method": "env_dpdk_get_mem_stats", 00:06:28.839 "req_id": 1 00:06:28.839 } 00:06:28.839 Got JSON-RPC error response 00:06:28.839 response: 00:06:28.839 { 00:06:28.839 "code": -32601, 00:06:28.839 "message": "Method not found" 00:06:28.839 } 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.839 19:35:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61574 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61574 ']' 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61574 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61574 00:06:28.839 killing process with pid 61574 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61574' 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@973 -- # kill 61574 00:06:28.839 19:35:11 app_cmdline -- common/autotest_common.sh@978 -- # wait 61574 00:06:31.384 00:06:31.384 real 0m4.301s 00:06:31.384 user 0m4.515s 00:06:31.384 sys 0m0.590s 00:06:31.384 19:35:13 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.384 19:35:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.384 ************************************ 00:06:31.384 END TEST app_cmdline 00:06:31.384 ************************************ 00:06:31.384 19:35:13 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.384 19:35:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.384 19:35:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.384 19:35:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.384 ************************************ 00:06:31.384 START TEST version 00:06:31.384 ************************************ 00:06:31.384 19:35:13 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.384 * Looking for test storage... 00:06:31.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.384 19:35:14 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.384 19:35:14 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.384 19:35:14 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.384 19:35:14 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.384 19:35:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.384 19:35:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.384 19:35:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.384 19:35:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.384 19:35:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.384 19:35:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.384 19:35:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.384 19:35:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.384 19:35:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.384 19:35:14 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:31.384 19:35:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.384 19:35:14 version -- scripts/common.sh@344 -- # case "$op" in 00:06:31.384 19:35:14 version -- scripts/common.sh@345 -- # : 1 00:06:31.384 19:35:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.384 19:35:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.384 19:35:14 version -- scripts/common.sh@365 -- # decimal 1 00:06:31.384 19:35:14 version -- scripts/common.sh@353 -- # local d=1 00:06:31.384 19:35:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.384 19:35:14 version -- scripts/common.sh@355 -- # echo 1 00:06:31.384 19:35:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.384 19:35:14 version -- scripts/common.sh@366 -- # decimal 2 00:06:31.384 19:35:14 version -- scripts/common.sh@353 -- # local d=2 00:06:31.384 19:35:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.384 19:35:14 version -- scripts/common.sh@355 -- # echo 2 00:06:31.384 19:35:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.384 19:35:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.384 19:35:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.384 19:35:14 version -- scripts/common.sh@368 -- # return 0 00:06:31.384 19:35:14 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.384 19:35:14 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.384 --rc genhtml_branch_coverage=1 00:06:31.384 --rc genhtml_function_coverage=1 00:06:31.384 --rc genhtml_legend=1 00:06:31.384 --rc geninfo_all_blocks=1 00:06:31.384 --rc geninfo_unexecuted_blocks=1 00:06:31.384 00:06:31.384 ' 00:06:31.384 19:35:14 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:06:31.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.384 --rc genhtml_branch_coverage=1 00:06:31.384 --rc genhtml_function_coverage=1 00:06:31.384 --rc genhtml_legend=1 00:06:31.384 --rc geninfo_all_blocks=1 00:06:31.384 --rc geninfo_unexecuted_blocks=1 00:06:31.384 00:06:31.384 ' 00:06:31.384 19:35:14 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.384 --rc genhtml_branch_coverage=1 00:06:31.384 --rc genhtml_function_coverage=1 00:06:31.384 --rc genhtml_legend=1 00:06:31.384 --rc geninfo_all_blocks=1 00:06:31.384 --rc geninfo_unexecuted_blocks=1 00:06:31.384 00:06:31.384 ' 00:06:31.384 19:35:14 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.384 --rc genhtml_branch_coverage=1 00:06:31.384 --rc genhtml_function_coverage=1 00:06:31.384 --rc genhtml_legend=1 00:06:31.384 --rc geninfo_all_blocks=1 00:06:31.384 --rc geninfo_unexecuted_blocks=1 00:06:31.384 00:06:31.384 ' 00:06:31.384 19:35:14 version -- app/version.sh@17 -- # get_header_version major 00:06:31.384 19:35:14 version -- app/version.sh@14 -- # cut -f2 00:06:31.384 19:35:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.384 19:35:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.384 19:35:14 version -- app/version.sh@17 -- # major=25 00:06:31.384 19:35:14 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.385 19:35:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.385 19:35:14 version -- app/version.sh@14 -- # cut -f2 00:06:31.385 19:35:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.385 19:35:14 version -- app/version.sh@18 -- # minor=1 00:06:31.385 19:35:14 
version -- app/version.sh@19 -- # get_header_version patch 00:06:31.385 19:35:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.385 19:35:14 version -- app/version.sh@14 -- # cut -f2 00:06:31.385 19:35:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.385 19:35:14 version -- app/version.sh@19 -- # patch=0 00:06:31.385 19:35:14 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.385 19:35:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.385 19:35:14 version -- app/version.sh@14 -- # cut -f2 00:06:31.385 19:35:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.385 19:35:14 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.385 19:35:14 version -- app/version.sh@22 -- # version=25.1 00:06:31.385 19:35:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.385 19:35:14 version -- app/version.sh@28 -- # version=25.1rc0 00:06:31.385 19:35:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.385 19:35:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.644 19:35:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:31.644 19:35:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:31.644 ************************************ 00:06:31.644 END TEST version 00:06:31.644 ************************************ 00:06:31.644 00:06:31.644 real 0m0.327s 00:06:31.644 user 0m0.201s 00:06:31.644 sys 0m0.180s 00:06:31.644 19:35:14 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.644 19:35:14 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.644 
19:35:14 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.644 19:35:14 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:31.644 19:35:14 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.644 19:35:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.644 19:35:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.644 19:35:14 -- common/autotest_common.sh@10 -- # set +x 00:06:31.644 ************************************ 00:06:31.644 START TEST bdev_raid 00:06:31.644 ************************************ 00:06:31.644 19:35:14 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.644 * Looking for test storage... 00:06:31.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.644 19:35:14 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.644 19:35:14 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.644 19:35:14 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.904 19:35:14 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.904 19:35:14 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:31.904 19:35:14 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.904 19:35:14 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.904 --rc genhtml_branch_coverage=1 00:06:31.904 --rc genhtml_function_coverage=1 00:06:31.904 --rc genhtml_legend=1 00:06:31.904 --rc geninfo_all_blocks=1 00:06:31.904 --rc geninfo_unexecuted_blocks=1 00:06:31.904 00:06:31.904 ' 00:06:31.904 19:35:14 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.904 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:31.904 --rc genhtml_branch_coverage=1 00:06:31.904 --rc genhtml_function_coverage=1 00:06:31.904 --rc genhtml_legend=1 00:06:31.904 --rc geninfo_all_blocks=1 00:06:31.904 --rc geninfo_unexecuted_blocks=1 00:06:31.904 00:06:31.904 ' 00:06:31.904 19:35:14 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.904 --rc genhtml_branch_coverage=1 00:06:31.904 --rc genhtml_function_coverage=1 00:06:31.904 --rc genhtml_legend=1 00:06:31.904 --rc geninfo_all_blocks=1 00:06:31.904 --rc geninfo_unexecuted_blocks=1 00:06:31.904 00:06:31.904 ' 00:06:31.904 19:35:14 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.904 --rc genhtml_branch_coverage=1 00:06:31.904 --rc genhtml_function_coverage=1 00:06:31.904 --rc genhtml_legend=1 00:06:31.904 --rc geninfo_all_blocks=1 00:06:31.904 --rc geninfo_unexecuted_blocks=1 00:06:31.904 00:06:31.904 ' 00:06:31.904 19:35:14 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.904 19:35:14 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.904 19:35:14 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:31.904 19:35:14 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:31.904 19:35:14 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:31.904 19:35:14 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:31.904 19:35:14 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:31.904 19:35:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.904 19:35:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.904 19:35:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:31.904 ************************************ 
00:06:31.904 START TEST raid1_resize_data_offset_test 00:06:31.904 ************************************ 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=61756 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:31.904 Process raid pid: 61756 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 61756' 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 61756 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 61756 ']' 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.904 19:35:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.904 [2024-12-12 19:35:14.669244] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:31.904 [2024-12-12 19:35:14.669487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.164 [2024-12-12 19:35:14.843406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.164 [2024-12-12 19:35:14.959330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.422 [2024-12-12 19:35:15.161699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.422 [2024-12-12 19:35:15.161826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.681 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.681 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:32.681 19:35:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:32.681 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.681 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.940 malloc0 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.940 malloc1 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.940 19:35:15 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.940 null0 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.940 [2024-12-12 19:35:15.678319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:32.940 [2024-12-12 19:35:15.680142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:32.940 [2024-12-12 19:35:15.680241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:32.940 [2024-12-12 19:35:15.680416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:32.940 [2024-12-12 19:35:15.680474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:32.940 [2024-12-12 19:35:15.680738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:32.940 [2024-12-12 19:35:15.680936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:32.940 [2024-12-12 19:35:15.680983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:32.940 [2024-12-12 19:35:15.681157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.940 19:35:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.941 [2024-12-12 19:35:15.738209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.941 19:35:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.513 malloc2 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.513 [2024-12-12 19:35:16.280215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:33.513 [2024-12-12 19:35:16.296333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.513 [2024-12-12 19:35:16.298074] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 61756 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 61756 ']' 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 61756 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:33.513 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61756 00:06:33.774 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.774 killing process with pid 61756 00:06:33.774 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.774 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61756' 00:06:33.774 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 61756 00:06:33.774 [2024-12-12 19:35:16.387426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:33.774 19:35:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 61756 00:06:33.774 [2024-12-12 19:35:16.388793] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:33.774 [2024-12-12 19:35:16.388868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.774 [2024-12-12 19:35:16.388888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:33.774 [2024-12-12 19:35:16.423588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.774 [2024-12-12 19:35:16.423920] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.774 [2024-12-12 19:35:16.423936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:35.685 [2024-12-12 19:35:18.134081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.634 ************************************ 00:06:36.634 END TEST raid1_resize_data_offset_test 00:06:36.634 ************************************ 00:06:36.634 19:35:19 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:36.634 00:06:36.634 real 0m4.656s 00:06:36.634 user 0m4.568s 00:06:36.634 sys 0m0.517s 00:06:36.634 19:35:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.634 19:35:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.634 19:35:19 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:36.634 19:35:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.634 19:35:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.634 19:35:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.634 ************************************ 00:06:36.634 START TEST raid0_resize_superblock_test 00:06:36.634 ************************************ 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=61845 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 61845' 00:06:36.634 Process raid pid: 61845 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 61845 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61845 ']' 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.634 19:35:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.634 [2024-12-12 19:35:19.382652] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:36.634 [2024-12-12 19:35:19.382881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.894 [2024-12-12 19:35:19.554143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.894 [2024-12-12 19:35:19.670360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.154 [2024-12-12 19:35:19.873639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.154 [2024-12-12 19:35:19.873681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.414 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.414 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:37.414 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:37.414 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.414 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:37.983 malloc0 00:06:37.983 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.983 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:37.983 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.983 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.983 [2024-12-12 19:35:20.735997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:37.983 [2024-12-12 19:35:20.736079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.983 [2024-12-12 19:35:20.736110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:37.983 [2024-12-12 19:35:20.736126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.983 [2024-12-12 19:35:20.738399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.983 [2024-12-12 19:35:20.738484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:37.983 pt0 00:06:37.983 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.983 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:37.983 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.983 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.243 d3f3eaa9-68e0-4555-9ee8-9e350ddee3a8 00:06:38.243 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.243 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:38.243 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.243 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.243 fc5e8b4c-eff1-4fff-89d8-38ee4c92f17b 00:06:38.243 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.244 e4230c1c-6bf6-44a2-8424-2074a991dfdf 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.244 [2024-12-12 19:35:20.865939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fc5e8b4c-eff1-4fff-89d8-38ee4c92f17b is claimed 00:06:38.244 [2024-12-12 19:35:20.866030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e4230c1c-6bf6-44a2-8424-2074a991dfdf is claimed 00:06:38.244 [2024-12-12 19:35:20.866148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:38.244 [2024-12-12 19:35:20.866163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:38.244 [2024-12-12 19:35:20.866411] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:38.244 [2024-12-12 19:35:20.866630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:38.244 [2024-12-12 19:35:20.866654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:38.244 [2024-12-12 19:35:20.866790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:38.244 19:35:20 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.244 [2024-12-12 19:35:20.977961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.244 19:35:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.244 [2024-12-12 19:35:21.021921] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:38.244 [2024-12-12 19:35:21.021950] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fc5e8b4c-eff1-4fff-89d8-38ee4c92f17b' was resized: old size 131072, new size 204800 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.244 [2024-12-12 19:35:21.033793] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:38.244 [2024-12-12 19:35:21.033859] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e4230c1c-6bf6-44a2-8424-2074a991dfdf' was resized: old size 131072, new size 204800 00:06:38.244 [2024-12-12 19:35:21.033915] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:38.244 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.244 19:35:21 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.505 [2024-12-12 19:35:21.137725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.505 [2024-12-12 19:35:21.185491] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:38.505 [2024-12-12 19:35:21.185662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:38.505 [2024-12-12 19:35:21.185733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.505 [2024-12-12 19:35:21.185781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:38.505 [2024-12-12 19:35:21.185955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.505 [2024-12-12 19:35:21.186040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.505 [2024-12-12 19:35:21.186095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.505 [2024-12-12 19:35:21.197355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:38.505 [2024-12-12 19:35:21.197440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:38.505 [2024-12-12 19:35:21.197475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:38.505 [2024-12-12 19:35:21.197505] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:38.505 [2024-12-12 19:35:21.199998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:38.505 [2024-12-12 19:35:21.200091] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:38.505 [2024-12-12 19:35:21.202019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fc5e8b4c-eff1-4fff-89d8-38ee4c92f17b 00:06:38.505 [2024-12-12 19:35:21.202178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fc5e8b4c-eff1-4fff-89d8-38ee4c92f17b is claimed 00:06:38.505 [2024-12-12 19:35:21.202376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e4230c1c-6bf6-44a2-8424-2074a991dfdf 00:06:38.505 [2024-12-12 19:35:21.202481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e4230c1c-6bf6-44a2-8424-2074a991dfdf is claimed 00:06:38.505 [2024-12-12 19:35:21.202744] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e4230c1c-6bf6-44a2-8424-2074a991dfdf (2) smaller than existing raid bdev Raid (3) 00:06:38.505 [2024-12-12 19:35:21.202840] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev fc5e8b4c-eff1-4fff-89d8-38ee4c92f17b: File exists 00:06:38.505 [2024-12-12 19:35:21.202937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:38.505 [2024-12-12 19:35:21.202975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:38.505 pt0 00:06:38.505 [2024-12-12 19:35:21.203277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:38.505 [2024-12-12 19:35:21.203478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:38.505 [2024-12-12 19:35:21.203532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:38.505 [2024-12-12 19:35:21.203790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.505 [2024-12-12 19:35:21.226254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 61845 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61845 ']' 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61845 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61845 00:06:38.505 killing process with pid 61845 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61845' 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 61845 00:06:38.505 [2024-12-12 19:35:21.286098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:38.505 [2024-12-12 19:35:21.286160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.505 [2024-12-12 19:35:21.286200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.505 [2024-12-12 19:35:21.286208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:38.505 19:35:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 61845 00:06:39.885 [2024-12-12 19:35:22.668753] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.266 ************************************ 00:06:41.266 END TEST raid0_resize_superblock_test 00:06:41.266 ************************************ 00:06:41.266 19:35:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:41.266 00:06:41.266 real 0m4.476s 00:06:41.266 user 0m4.641s 00:06:41.266 sys 0m0.577s 00:06:41.266 19:35:23 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.266 19:35:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.266 19:35:23 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:41.266 19:35:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.266 19:35:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.266 19:35:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.266 ************************************ 00:06:41.266 START TEST raid1_resize_superblock_test 00:06:41.266 ************************************ 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=61938 00:06:41.266 Process raid pid: 61938 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 61938' 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 61938 00:06:41.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61938 ']' 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.266 19:35:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.266 [2024-12-12 19:35:23.917818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:41.266 [2024-12-12 19:35:23.917929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.266 [2024-12-12 19:35:24.089784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.525 [2024-12-12 19:35:24.203519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.785 [2024-12-12 19:35:24.412996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.785 [2024-12-12 19:35:24.413078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.044 19:35:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.044 19:35:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:42.044 19:35:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:06:42.044 19:35:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.044 19:35:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.612 malloc0 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.612 [2024-12-12 19:35:25.310738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:42.612 [2024-12-12 19:35:25.310796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:42.612 [2024-12-12 19:35:25.310819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:42.612 [2024-12-12 19:35:25.310832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:42.612 [2024-12-12 19:35:25.312961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:42.612 pt0 00:06:42.612 [2024-12-12 19:35:25.313086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.612 32d3e07b-56f0-44e4-a8d0-db0491bf45d2 00:06:42.612 19:35:25 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.612 b021d696-88de-4704-bd8c-be84d36b3b1a 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.612 584c5d0d-718f-4de1-8b02-4412064d6c1e 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.612 [2024-12-12 19:35:25.445057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b021d696-88de-4704-bd8c-be84d36b3b1a is claimed 00:06:42.612 [2024-12-12 19:35:25.445149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 584c5d0d-718f-4de1-8b02-4412064d6c1e is claimed 00:06:42.612 [2024-12-12 19:35:25.445310] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:42.612 [2024-12-12 19:35:25.445362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:42.612 [2024-12-12 19:35:25.445668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:42.612 [2024-12-12 19:35:25.445918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:42.612 [2024-12-12 19:35:25.445935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:42.612 [2024-12-12 19:35:25.446110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.612 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:42.872 19:35:25 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:42.872 [2024-12-12 19:35:25.557070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.872 [2024-12-12 19:35:25.604967] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.872 [2024-12-12 19:35:25.605038] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'b021d696-88de-4704-bd8c-be84d36b3b1a' was resized: old size 131072, new size 204800 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.872 [2024-12-12 19:35:25.616919] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.872 [2024-12-12 19:35:25.616944] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '584c5d0d-718f-4de1-8b02-4412064d6c1e' was resized: old size 131072, new size 204800 00:06:42.872 [2024-12-12 19:35:25.616973] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:42.872 19:35:25 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.872 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:43.132 [2024-12-12 19:35:25.728773] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.132 [2024-12-12 19:35:25.772515] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:43.132 [2024-12-12 19:35:25.772618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:43.132 [2024-12-12 19:35:25.772646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:43.132 [2024-12-12 19:35:25.772812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.132 [2024-12-12 19:35:25.773036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.132 [2024-12-12 19:35:25.773117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.132 [2024-12-12 19:35:25.773132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.132 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.132 [2024-12-12 19:35:25.784407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:43.132 [2024-12-12 19:35:25.784507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.132 [2024-12-12 19:35:25.784541] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:43.132 [2024-12-12 19:35:25.784591] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.132 
[2024-12-12 19:35:25.786806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.132 [2024-12-12 19:35:25.786884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:43.132 [2024-12-12 19:35:25.788632] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b021d696-88de-4704-bd8c-be84d36b3b1a 00:06:43.132 [2024-12-12 19:35:25.788753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b021d696-88de-4704-bd8c-be84d36b3b1a is claimed 00:06:43.133 [2024-12-12 19:35:25.788927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 584c5d0d-718f-4de1-8b02-4412064d6c1e 00:06:43.133 [2024-12-12 19:35:25.788994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 584c5d0d-718f-4de1-8b02-4412064d6c1e is claimed 00:06:43.133 [2024-12-12 19:35:25.789229] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 584c5d0d-718f-4de1-8b02-4412064d6c1e (2) smaller than existing raid bdev Raid (3) 00:06:43.133 [2024-12-12 19:35:25.789322] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b021d696-88de-4704-bd8c-be84d36b3b1a: File exists 00:06:43.133 [2024-12-12 19:35:25.789415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:43.133 [2024-12-12 19:35:25.789453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:43.133 [2024-12-12 19:35:25.789743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:43.133 pt0 00:06:43.133 [2024-12-12 19:35:25.789952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:43.133 [2024-12-12 19:35:25.789967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:43.133 [2024-12-12 19:35:25.790127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:43.133 [2024-12-12 19:35:25.808863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 61938 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 61938 ']' 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61938 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61938 00:06:43.133 killing process with pid 61938 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61938' 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 61938 00:06:43.133 [2024-12-12 19:35:25.884098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.133 [2024-12-12 19:35:25.884166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.133 [2024-12-12 19:35:25.884213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.133 [2024-12-12 19:35:25.884223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:43.133 19:35:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 61938 00:06:44.512 [2024-12-12 19:35:27.277905] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.893 19:35:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:45.893 00:06:45.893 real 0m4.545s 00:06:45.893 user 0m4.787s 00:06:45.893 sys 0m0.540s 
00:06:45.893 19:35:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.893 ************************************ 00:06:45.893 END TEST raid1_resize_superblock_test 00:06:45.893 ************************************ 00:06:45.893 19:35:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.893 19:35:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:45.893 19:35:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:45.894 19:35:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:45.894 19:35:28 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:45.894 19:35:28 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:45.894 19:35:28 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:45.894 19:35:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.894 19:35:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.894 19:35:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.894 ************************************ 00:06:45.894 START TEST raid_function_test_raid0 00:06:45.894 ************************************ 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=62041 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.894 19:35:28 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 62041' 00:06:45.894 Process raid pid: 62041 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 62041 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 62041 ']' 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.894 19:35:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.894 [2024-12-12 19:35:28.555972] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:45.894 [2024-12-12 19:35:28.556183] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.894 [2024-12-12 19:35:28.731982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.153 [2024-12-12 19:35:28.844744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.413 [2024-12-12 19:35:29.044430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.413 [2024-12-12 19:35:29.044578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.673 Base_1 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.673 Base_2 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.673 [2024-12-12 19:35:29.484417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:46.673 [2024-12-12 19:35:29.486234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:46.673 [2024-12-12 19:35:29.486300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.673 [2024-12-12 19:35:29.486311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.673 [2024-12-12 19:35:29.486561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:46.673 [2024-12-12 19:35:29.486711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.673 [2024-12-12 19:35:29.486719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:46.673 [2024-12-12 19:35:29.486858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.673 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:46.933 [2024-12-12 19:35:29.724068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:46.933 /dev/nbd0 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.933 
19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.933 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.193 1+0 records in 00:06:47.193 1+0 records out 00:06:47.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401787 s, 10.2 MB/s 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:47.193 19:35:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.193 { 00:06:47.193 "nbd_device": "/dev/nbd0", 00:06:47.193 "bdev_name": "raid" 00:06:47.193 } 00:06:47.193 ]' 00:06:47.193 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.193 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.193 { 00:06:47.193 "nbd_device": "/dev/nbd0", 00:06:47.193 "bdev_name": "raid" 00:06:47.193 } 00:06:47.193 ]' 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' 
-f 5 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:47.468 4096+0 records in 00:06:47.468 4096+0 records out 00:06:47.468 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0341985 s, 61.3 MB/s 00:06:47.468 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:47.736 4096+0 records in 00:06:47.736 4096+0 records out 00:06:47.736 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.196152 s, 10.7 MB/s 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 
-- # (( i = 0 )) 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:47.736 128+0 records in 00:06:47.736 128+0 records out 00:06:47.736 65536 bytes (66 kB, 64 KiB) copied, 0.00127828 s, 51.3 MB/s 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:47.736 2035+0 records in 00:06:47.736 2035+0 records out 00:06:47.736 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0153769 s, 67.8 MB/s 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.736 19:35:30 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:47.736 456+0 records in 00:06:47.736 456+0 records out 00:06:47.736 233472 bytes (233 kB, 228 KiB) copied, 0.00242082 s, 96.4 MB/s 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.736 19:35:30 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.736 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.996 [2024-12-12 19:35:30.638825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.996 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 62041 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 62041 ']' 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 62041 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62041 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.256 killing process with pid 62041 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62041' 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 62041 
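The discard/compare loop traced above (bdev_raid.sh@36-48) derives each byte offset and length by multiplying the block numbers from `unmap_blk_offs`/`unmap_blk_nums` by the 512-byte block size reported by lsblk, then zeroes the same range in the reference file, issues blkdiscard on the nbd device, flushes, and compares. A minimal sketch of that arithmetic, using the exact arrays from the log (the dd/blkdiscard/cmp commands are shown as comments only, since they need the test's device and reference file):

```shell
#!/usr/bin/env bash
# Block size as reported by 'lsblk -o LOG-SEC /dev/nbd0' in the log above.
blksize=512
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)

for i in "${!unmap_blk_offs[@]}"; do
    # bdev_raid.sh@37-38: convert block numbers to byte offset/length.
    unmap_off=$((unmap_blk_offs[i] * blksize))
    unmap_len=$((unmap_blk_nums[i] * blksize))
    echo "$unmap_off $unmap_len"
    # The test then keeps the reference file and device in sync
    # (not executed here; $ref and $nbd are the test's file/device):
    #   dd if=/dev/zero of="$ref" bs=$blksize \
    #      seek=${unmap_blk_offs[i]} count=${unmap_blk_nums[i]} conv=notrunc
    #   blkdiscard -o "$unmap_off" -l "$unmap_len" "$nbd"
    #   blockdev --flushbufs "$nbd"
    #   cmp -b -n $((4096 * blksize)) "$ref" "$nbd"
done
```

The echoed pairs match the values logged at bdev_raid.sh@37-38: 0/65536, 526336/1041920, 164352/233472.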
00:06:48.256 [2024-12-12 19:35:30.950866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.256 19:35:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 62041 00:06:48.256 [2024-12-12 19:35:30.951086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.256 [2024-12-12 19:35:30.951145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.256 [2024-12-12 19:35:30.951161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:48.516 [2024-12-12 19:35:31.153718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.454 19:35:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:49.454 00:06:49.454 real 0m3.780s 00:06:49.454 user 0m4.395s 00:06:49.454 sys 0m0.927s 00:06:49.454 19:35:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.454 19:35:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:49.454 ************************************ 00:06:49.454 END TEST raid_function_test_raid0 00:06:49.454 ************************************ 00:06:49.714 19:35:32 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:49.714 19:35:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.714 19:35:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.714 19:35:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.714 ************************************ 00:06:49.714 START TEST raid_function_test_concat 00:06:49.714 ************************************ 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=62172 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 62172' 00:06:49.714 Process raid pid: 62172 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 62172 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 62172 ']' 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.714 19:35:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.714 [2024-12-12 19:35:32.400613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
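The `waitforlisten 62172` step above polls until the freshly started bdev_svc process is listening on the UNIX domain socket `/var/tmp/spdk.sock` before any rpc.py calls are made. This is only a simplified sketch of that idea, not SPDK's actual waitforlisten implementation (which also checks the pid and retries rpc.py itself); the retry count and sleep interval here are illustrative:

```shell
#!/usr/bin/env bash
# Poll until a UNIX domain socket appears, or give up after max_retries.
wait_for_socket() {
    local sock=$1
    local max_retries=${2:-100}
    local i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            return 1   # socket never showed up
        fi
        sleep 0.1
    done
    return 0
}
# Usage (hypothetical): wait_for_socket /var/tmp/spdk.sock && rpc.py ...
```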
00:06:49.714 [2024-12-12 19:35:32.400803] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.974 [2024-12-12 19:35:32.572299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.974 [2024-12-12 19:35:32.685020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.234 [2024-12-12 19:35:32.886335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.234 [2024-12-12 19:35:32.886477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.494 Base_1 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.494 Base_2 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.494 [2024-12-12 19:35:33.314012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:50.494 [2024-12-12 19:35:33.315816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:50.494 [2024-12-12 19:35:33.315883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:50.494 [2024-12-12 19:35:33.315894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:50.494 [2024-12-12 19:35:33.316129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:50.494 [2024-12-12 19:35:33.316275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:50.494 [2024-12-12 19:35:33.316284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:50.494 [2024-12-12 19:35:33.316426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:50.494 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.754 19:35:33 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:50.754 [2024-12-12 19:35:33.533719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:50.754 /dev/nbd0 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.754 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.755 1+0 records in 00:06:50.755 1+0 records out 00:06:50.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331244 s, 12.4 MB/s 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.755 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.014 { 00:06:51.014 "nbd_device": "/dev/nbd0", 00:06:51.014 "bdev_name": "raid" 00:06:51.014 } 00:06:51.014 ]' 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.014 { 00:06:51.014 "nbd_device": "/dev/nbd0", 00:06:51.014 "bdev_name": "raid" 00:06:51.014 } 00:06:51.014 ]' 00:06:51.014 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 
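The nbd_get_count step traced above (nbd_common.sh@63-66) counts active nbd devices by extracting `nbd_device` fields from the `nbd_get_disks` JSON with jq and counting `/dev/nbd` matches with grep. A small reconstruction with the JSON from the log inlined, so no RPC socket is needed (requires jq to be installed):

```shell
#!/usr/bin/env bash
# JSON as returned by 'rpc.py nbd_get_disks' in the log above.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "raid" } ]'

# nbd_common.sh@64: pull out the device names.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# nbd_common.sh@65: count /dev/nbd entries; '|| true' mirrors the log's
# 'true' fallback, since grep -c exits non-zero when the count is 0
# (as happens after nbd_stop_disk, when the JSON is '[]').
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```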
00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:51.275 4096+0 records in 00:06:51.275 4096+0 records out 00:06:51.275 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0344602 s, 60.9 MB/s 00:06:51.275 19:35:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:51.535 4096+0 records in 00:06:51.535 4096+0 records out 00:06:51.535 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.222734 s, 9.4 MB/s 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:51.535 128+0 records in 00:06:51.535 128+0 records out 00:06:51.535 65536 bytes (66 kB, 64 KiB) copied, 0.00123191 s, 53.2 MB/s 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:51.535 2035+0 records in 00:06:51.535 2035+0 records out 00:06:51.535 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0137217 s, 75.9 MB/s 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:51.535 19:35:34 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:51.535 456+0 records in 00:06:51.535 456+0 records out 00:06:51.535 233472 bytes (233 kB, 228 KiB) copied, 0.00324605 s, 71.9 MB/s 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.535 
19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.535 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.795 [2024-12-12 19:35:34.504393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.795 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.056 19:35:34 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 62172 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 62172 ']' 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 62172 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62172 00:06:52.056 killing process with pid 62172 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62172' 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 62172 00:06:52.056 [2024-12-12 19:35:34.828613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.056 [2024-12-12 19:35:34.828718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.056 [2024-12-12 19:35:34.828770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.056 [2024-12-12 19:35:34.828783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:52.056 19:35:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 62172 00:06:52.316 [2024-12-12 19:35:35.039445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.696 19:35:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:53.696 00:06:53.696 real 0m3.820s 00:06:53.696 user 0m4.399s 00:06:53.696 sys 0m0.967s 00:06:53.696 19:35:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.696 19:35:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:53.696 ************************************ 00:06:53.696 END TEST raid_function_test_concat 00:06:53.696 ************************************ 00:06:53.696 19:35:36 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:53.696 19:35:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.696 19:35:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.696 19:35:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.696 ************************************ 00:06:53.696 START TEST raid0_resize_test 00:06:53.696 ************************************ 00:06:53.696 19:35:36 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:53.697 Process raid pid: 62294 00:06:53.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=62294 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 62294' 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 62294 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 62294 ']' 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.697 19:35:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.697 [2024-12-12 19:35:36.288234] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:53.697 [2024-12-12 19:35:36.288429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.697 [2024-12-12 19:35:36.442371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.956 [2024-12-12 19:35:36.555893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.956 [2024-12-12 19:35:36.757467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.956 [2024-12-12 19:35:36.757615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.526 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.526 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:54.526 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:54.526 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.526 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.526 Base_1 00:06:54.526 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.526 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:54.526 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.526 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.526 Base_2 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.527 [2024-12-12 19:35:37.146824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:54.527 [2024-12-12 19:35:37.148629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:54.527 [2024-12-12 19:35:37.148720] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:54.527 [2024-12-12 19:35:37.148756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:54.527 [2024-12-12 19:35:37.149048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:54.527 [2024-12-12 19:35:37.149216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:54.527 [2024-12-12 19:35:37.149271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:54.527 [2024-12-12 19:35:37.149519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.527 [2024-12-12 19:35:37.158776] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.527 [2024-12-12 19:35:37.158818] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:54.527 true 
00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:54.527 [2024-12-12 19:35:37.170934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.527 [2024-12-12 19:35:37.218691] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.527 [2024-12-12 19:35:37.218753] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:54.527 [2024-12-12 19:35:37.218803] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:54.527 true 
00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.527 [2024-12-12 19:35:37.230819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 62294 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 62294 ']' 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 62294 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62294 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.527 19:35:37 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62294' 00:06:54.527 killing process with pid 62294 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 62294 00:06:54.527 [2024-12-12 19:35:37.316600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.527 [2024-12-12 19:35:37.316729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.527 [2024-12-12 19:35:37.316809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.527 19:35:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 62294 00:06:54.527 [2024-12-12 19:35:37.316865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:54.527 [2024-12-12 19:35:37.334270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.930 19:35:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:55.930 00:06:55.930 real 0m2.243s 00:06:55.930 user 0m2.370s 00:06:55.930 sys 0m0.342s 00:06:55.930 19:35:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.930 19:35:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.930 ************************************ 00:06:55.930 END TEST raid0_resize_test 00:06:55.930 ************************************ 00:06:55.930 19:35:38 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:55.930 19:35:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.930 19:35:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.930 19:35:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.930 
************************************ 00:06:55.930 START TEST raid1_resize_test 00:06:55.930 ************************************ 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:55.930 Process raid pid: 62356 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=62356 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 62356' 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 62356 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 62356 ']' 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.930 19:35:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.930 [2024-12-12 19:35:38.595867] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:55.930 [2024-12-12 19:35:38.596075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.930 [2024-12-12 19:35:38.767477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.192 [2024-12-12 19:35:38.880135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.451 [2024-12-12 19:35:39.080985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.451 [2024-12-12 19:35:39.081100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 Base_1 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:56.711 19:35:39 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 Base_2 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 [2024-12-12 19:35:39.455095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:56.711 [2024-12-12 19:35:39.456944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:56.711 [2024-12-12 19:35:39.457040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.711 [2024-12-12 19:35:39.457078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:56.711 [2024-12-12 19:35:39.457418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:56.711 [2024-12-12 19:35:39.457621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.711 [2024-12-12 19:35:39.457665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:56.711 [2024-12-12 19:35:39.457907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:56.711 19:35:39 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 [2024-12-12 19:35:39.467062] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:56.711 [2024-12-12 19:35:39.467127] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:56.711 true 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 [2024-12-12 19:35:39.483206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:56.711 [2024-12-12 19:35:39.526934] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:56.711 [2024-12-12 19:35:39.526992] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:56.711 [2024-12-12 19:35:39.527043] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:56.711 true 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.711 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 [2024-12-12 19:35:39.543066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 62356 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 62356 ']' 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 62356 00:06:56.971 
19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62356 00:06:56.971 killing process with pid 62356 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62356' 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 62356 00:06:56.971 [2024-12-12 19:35:39.625319] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:56.971 [2024-12-12 19:35:39.625421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.971 19:35:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 62356 00:06:56.971 [2024-12-12 19:35:39.625973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.971 [2024-12-12 19:35:39.626001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:56.971 [2024-12-12 19:35:39.642957] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.908 19:35:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:57.908 00:06:57.908 real 0m2.232s 00:06:57.908 user 0m2.376s 00:06:57.908 sys 0m0.325s 00:06:57.908 19:35:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.908 19:35:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.909 ************************************ 00:06:57.909 END TEST raid1_resize_test 
00:06:57.909 ************************************ 00:06:58.168 19:35:40 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:58.168 19:35:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:58.168 19:35:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:58.168 19:35:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:58.168 19:35:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.168 19:35:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.168 ************************************ 00:06:58.168 START TEST raid_state_function_test 00:06:58.168 ************************************ 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:58.168 19:35:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:58.168 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62413 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62413' 00:06:58.169 Process raid pid: 62413 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62413 00:06:58.169 19:35:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62413 ']' 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.169 19:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.169 [2024-12-12 19:35:40.907027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:58.169 [2024-12-12 19:35:40.907659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.428 [2024-12-12 19:35:41.082694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.428 [2024-12-12 19:35:41.199514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.687 [2024-12-12 19:35:41.399931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.687 [2024-12-12 19:35:41.400049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.946 [2024-12-12 19:35:41.734803] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:58.946 [2024-12-12 19:35:41.734907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:58.946 [2024-12-12 19:35:41.734936] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.946 [2024-12-12 19:35:41.734958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.946 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.946 
19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.947 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.947 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.947 19:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.947 19:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.947 19:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.206 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.206 "name": "Existed_Raid", 00:06:59.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.206 "strip_size_kb": 64, 00:06:59.206 "state": "configuring", 00:06:59.206 "raid_level": "raid0", 00:06:59.206 "superblock": false, 00:06:59.206 "num_base_bdevs": 2, 00:06:59.206 "num_base_bdevs_discovered": 0, 00:06:59.206 "num_base_bdevs_operational": 2, 00:06:59.206 "base_bdevs_list": [ 00:06:59.206 { 00:06:59.206 "name": "BaseBdev1", 00:06:59.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.206 "is_configured": false, 00:06:59.206 "data_offset": 0, 00:06:59.206 "data_size": 0 00:06:59.206 }, 00:06:59.206 { 00:06:59.206 "name": "BaseBdev2", 00:06:59.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.206 "is_configured": false, 00:06:59.206 "data_offset": 0, 00:06:59.206 "data_size": 0 00:06:59.206 } 00:06:59.206 ] 00:06:59.206 }' 00:06:59.206 19:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.206 19:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.465 19:35:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.465 [2024-12-12 19:35:42.158034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.465 [2024-12-12 19:35:42.158072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.465 [2024-12-12 19:35:42.169990] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:59.465 [2024-12-12 19:35:42.170033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:59.465 [2024-12-12 19:35:42.170044] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.465 [2024-12-12 19:35:42.170055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.465 [2024-12-12 19:35:42.219289] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.465 BaseBdev1 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.465 [ 00:06:59.465 { 00:06:59.465 "name": "BaseBdev1", 00:06:59.465 "aliases": [ 00:06:59.465 "2e7b294e-9c46-476b-bd69-5c10b9444024" 00:06:59.465 ], 00:06:59.465 "product_name": "Malloc disk", 00:06:59.465 "block_size": 512, 00:06:59.465 "num_blocks": 65536, 00:06:59.465 "uuid": 
"2e7b294e-9c46-476b-bd69-5c10b9444024", 00:06:59.465 "assigned_rate_limits": { 00:06:59.465 "rw_ios_per_sec": 0, 00:06:59.465 "rw_mbytes_per_sec": 0, 00:06:59.465 "r_mbytes_per_sec": 0, 00:06:59.465 "w_mbytes_per_sec": 0 00:06:59.465 }, 00:06:59.465 "claimed": true, 00:06:59.465 "claim_type": "exclusive_write", 00:06:59.465 "zoned": false, 00:06:59.465 "supported_io_types": { 00:06:59.465 "read": true, 00:06:59.465 "write": true, 00:06:59.465 "unmap": true, 00:06:59.465 "flush": true, 00:06:59.465 "reset": true, 00:06:59.465 "nvme_admin": false, 00:06:59.465 "nvme_io": false, 00:06:59.465 "nvme_io_md": false, 00:06:59.465 "write_zeroes": true, 00:06:59.465 "zcopy": true, 00:06:59.465 "get_zone_info": false, 00:06:59.465 "zone_management": false, 00:06:59.465 "zone_append": false, 00:06:59.465 "compare": false, 00:06:59.465 "compare_and_write": false, 00:06:59.465 "abort": true, 00:06:59.465 "seek_hole": false, 00:06:59.465 "seek_data": false, 00:06:59.465 "copy": true, 00:06:59.465 "nvme_iov_md": false 00:06:59.465 }, 00:06:59.465 "memory_domains": [ 00:06:59.465 { 00:06:59.465 "dma_device_id": "system", 00:06:59.465 "dma_device_type": 1 00:06:59.465 }, 00:06:59.465 { 00:06:59.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.465 "dma_device_type": 2 00:06:59.465 } 00:06:59.465 ], 00:06:59.465 "driver_specific": {} 00:06:59.465 } 00:06:59.465 ] 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.465 19:35:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.465 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.724 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.724 "name": "Existed_Raid", 00:06:59.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.724 "strip_size_kb": 64, 00:06:59.724 "state": "configuring", 00:06:59.724 "raid_level": "raid0", 00:06:59.724 "superblock": false, 00:06:59.724 "num_base_bdevs": 2, 00:06:59.724 "num_base_bdevs_discovered": 1, 00:06:59.724 "num_base_bdevs_operational": 2, 00:06:59.724 "base_bdevs_list": [ 00:06:59.724 { 00:06:59.724 "name": "BaseBdev1", 00:06:59.724 "uuid": "2e7b294e-9c46-476b-bd69-5c10b9444024", 00:06:59.724 "is_configured": true, 00:06:59.724 "data_offset": 0, 
00:06:59.724 "data_size": 65536 00:06:59.724 }, 00:06:59.724 { 00:06:59.724 "name": "BaseBdev2", 00:06:59.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.724 "is_configured": false, 00:06:59.724 "data_offset": 0, 00:06:59.724 "data_size": 0 00:06:59.724 } 00:06:59.724 ] 00:06:59.724 }' 00:06:59.724 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.724 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.983 [2024-12-12 19:35:42.718536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.983 [2024-12-12 19:35:42.718681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.983 [2024-12-12 19:35:42.730530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.983 [2024-12-12 19:35:42.732354] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.983 [2024-12-12 19:35:42.732435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.983 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.983 "name": "Existed_Raid", 00:06:59.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.983 "strip_size_kb": 64, 00:06:59.983 "state": "configuring", 00:06:59.983 "raid_level": "raid0", 00:06:59.983 "superblock": false, 00:06:59.984 "num_base_bdevs": 2, 00:06:59.984 "num_base_bdevs_discovered": 1, 00:06:59.984 "num_base_bdevs_operational": 2, 00:06:59.984 "base_bdevs_list": [ 00:06:59.984 { 00:06:59.984 "name": "BaseBdev1", 00:06:59.984 "uuid": "2e7b294e-9c46-476b-bd69-5c10b9444024", 00:06:59.984 "is_configured": true, 00:06:59.984 "data_offset": 0, 00:06:59.984 "data_size": 65536 00:06:59.984 }, 00:06:59.984 { 00:06:59.984 "name": "BaseBdev2", 00:06:59.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.984 "is_configured": false, 00:06:59.984 "data_offset": 0, 00:06:59.984 "data_size": 0 00:06:59.984 } 00:06:59.984 ] 00:06:59.984 }' 00:06:59.984 19:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.984 19:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.552 [2024-12-12 19:35:43.181502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:00.552 [2024-12-12 19:35:43.181621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:00.552 [2024-12-12 19:35:43.181651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.552 [2024-12-12 19:35:43.181945] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.552 [2024-12-12 19:35:43.182121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:00.552 [2024-12-12 19:35:43.182133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:00.552 [2024-12-12 19:35:43.182382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.552 BaseBdev2 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.552 19:35:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.552 [ 00:07:00.552 { 00:07:00.552 "name": "BaseBdev2", 00:07:00.552 "aliases": [ 00:07:00.552 "374c3bea-3409-4fbd-9aef-ee60d4e7502b" 00:07:00.552 ], 00:07:00.552 "product_name": "Malloc disk", 00:07:00.552 "block_size": 512, 00:07:00.552 "num_blocks": 65536, 00:07:00.552 "uuid": "374c3bea-3409-4fbd-9aef-ee60d4e7502b", 00:07:00.552 "assigned_rate_limits": { 00:07:00.552 "rw_ios_per_sec": 0, 00:07:00.552 "rw_mbytes_per_sec": 0, 00:07:00.552 "r_mbytes_per_sec": 0, 00:07:00.552 "w_mbytes_per_sec": 0 00:07:00.552 }, 00:07:00.552 "claimed": true, 00:07:00.552 "claim_type": "exclusive_write", 00:07:00.552 "zoned": false, 00:07:00.552 "supported_io_types": { 00:07:00.552 "read": true, 00:07:00.552 "write": true, 00:07:00.552 "unmap": true, 00:07:00.552 "flush": true, 00:07:00.552 "reset": true, 00:07:00.552 "nvme_admin": false, 00:07:00.552 "nvme_io": false, 00:07:00.552 "nvme_io_md": false, 00:07:00.552 "write_zeroes": true, 00:07:00.552 "zcopy": true, 00:07:00.552 "get_zone_info": false, 00:07:00.552 "zone_management": false, 00:07:00.552 "zone_append": false, 00:07:00.552 "compare": false, 00:07:00.552 "compare_and_write": false, 00:07:00.552 "abort": true, 00:07:00.552 "seek_hole": false, 00:07:00.552 "seek_data": false, 00:07:00.552 "copy": true, 00:07:00.552 "nvme_iov_md": false 00:07:00.552 }, 00:07:00.552 "memory_domains": [ 00:07:00.552 { 00:07:00.552 "dma_device_id": "system", 00:07:00.552 "dma_device_type": 1 00:07:00.552 }, 00:07:00.552 { 00:07:00.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.552 "dma_device_type": 2 00:07:00.552 } 00:07:00.552 ], 00:07:00.552 "driver_specific": {} 00:07:00.552 } 00:07:00.552 ] 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:00.552 19:35:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:00.552 "name": "Existed_Raid", 00:07:00.552 "uuid": "b2e60896-d88f-4845-89f1-d06362d08b1f", 00:07:00.552 "strip_size_kb": 64, 00:07:00.552 "state": "online", 00:07:00.552 "raid_level": "raid0", 00:07:00.552 "superblock": false, 00:07:00.552 "num_base_bdevs": 2, 00:07:00.552 "num_base_bdevs_discovered": 2, 00:07:00.552 "num_base_bdevs_operational": 2, 00:07:00.552 "base_bdevs_list": [ 00:07:00.552 { 00:07:00.552 "name": "BaseBdev1", 00:07:00.552 "uuid": "2e7b294e-9c46-476b-bd69-5c10b9444024", 00:07:00.552 "is_configured": true, 00:07:00.552 "data_offset": 0, 00:07:00.552 "data_size": 65536 00:07:00.552 }, 00:07:00.552 { 00:07:00.552 "name": "BaseBdev2", 00:07:00.552 "uuid": "374c3bea-3409-4fbd-9aef-ee60d4e7502b", 00:07:00.552 "is_configured": true, 00:07:00.552 "data_offset": 0, 00:07:00.552 "data_size": 65536 00:07:00.552 } 00:07:00.552 ] 00:07:00.552 }' 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.552 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.123 [2024-12-12 19:35:43.668999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:01.123 "name": "Existed_Raid", 00:07:01.123 "aliases": [ 00:07:01.123 "b2e60896-d88f-4845-89f1-d06362d08b1f" 00:07:01.123 ], 00:07:01.123 "product_name": "Raid Volume", 00:07:01.123 "block_size": 512, 00:07:01.123 "num_blocks": 131072, 00:07:01.123 "uuid": "b2e60896-d88f-4845-89f1-d06362d08b1f", 00:07:01.123 "assigned_rate_limits": { 00:07:01.123 "rw_ios_per_sec": 0, 00:07:01.123 "rw_mbytes_per_sec": 0, 00:07:01.123 "r_mbytes_per_sec": 0, 00:07:01.123 "w_mbytes_per_sec": 0 00:07:01.123 }, 00:07:01.123 "claimed": false, 00:07:01.123 "zoned": false, 00:07:01.123 "supported_io_types": { 00:07:01.123 "read": true, 00:07:01.123 "write": true, 00:07:01.123 "unmap": true, 00:07:01.123 "flush": true, 00:07:01.123 "reset": true, 00:07:01.123 "nvme_admin": false, 00:07:01.123 "nvme_io": false, 00:07:01.123 "nvme_io_md": false, 00:07:01.123 "write_zeroes": true, 00:07:01.123 "zcopy": false, 00:07:01.123 "get_zone_info": false, 00:07:01.123 "zone_management": false, 00:07:01.123 "zone_append": false, 00:07:01.123 "compare": false, 00:07:01.123 "compare_and_write": false, 00:07:01.123 "abort": false, 00:07:01.123 "seek_hole": false, 00:07:01.123 "seek_data": false, 00:07:01.123 "copy": false, 00:07:01.123 "nvme_iov_md": false 00:07:01.123 }, 00:07:01.123 "memory_domains": [ 00:07:01.123 { 00:07:01.123 "dma_device_id": "system", 00:07:01.123 "dma_device_type": 1 00:07:01.123 }, 00:07:01.123 { 00:07:01.123 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:01.123 "dma_device_type": 2 00:07:01.123 }, 00:07:01.123 { 00:07:01.123 "dma_device_id": "system", 00:07:01.123 "dma_device_type": 1 00:07:01.123 }, 00:07:01.123 { 00:07:01.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.123 "dma_device_type": 2 00:07:01.123 } 00:07:01.123 ], 00:07:01.123 "driver_specific": { 00:07:01.123 "raid": { 00:07:01.123 "uuid": "b2e60896-d88f-4845-89f1-d06362d08b1f", 00:07:01.123 "strip_size_kb": 64, 00:07:01.123 "state": "online", 00:07:01.123 "raid_level": "raid0", 00:07:01.123 "superblock": false, 00:07:01.123 "num_base_bdevs": 2, 00:07:01.123 "num_base_bdevs_discovered": 2, 00:07:01.123 "num_base_bdevs_operational": 2, 00:07:01.123 "base_bdevs_list": [ 00:07:01.123 { 00:07:01.123 "name": "BaseBdev1", 00:07:01.123 "uuid": "2e7b294e-9c46-476b-bd69-5c10b9444024", 00:07:01.123 "is_configured": true, 00:07:01.123 "data_offset": 0, 00:07:01.123 "data_size": 65536 00:07:01.123 }, 00:07:01.123 { 00:07:01.123 "name": "BaseBdev2", 00:07:01.123 "uuid": "374c3bea-3409-4fbd-9aef-ee60d4e7502b", 00:07:01.123 "is_configured": true, 00:07:01.123 "data_offset": 0, 00:07:01.123 "data_size": 65536 00:07:01.123 } 00:07:01.123 ] 00:07:01.123 } 00:07:01.123 } 00:07:01.123 }' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:01.123 BaseBdev2' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.123 19:35:43 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:01.123 [2024-12-12 19:35:43.876408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:01.123 [2024-12-12 19:35:43.876483] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.123 [2024-12-12 19:35:43.876571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.384 19:35:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.384 19:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.384 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.384 "name": "Existed_Raid", 00:07:01.384 "uuid": "b2e60896-d88f-4845-89f1-d06362d08b1f", 00:07:01.384 "strip_size_kb": 64, 00:07:01.384 "state": "offline", 00:07:01.384 "raid_level": "raid0", 00:07:01.384 "superblock": false, 00:07:01.384 "num_base_bdevs": 2, 00:07:01.384 "num_base_bdevs_discovered": 1, 00:07:01.384 "num_base_bdevs_operational": 1, 00:07:01.384 "base_bdevs_list": [ 00:07:01.384 { 00:07:01.384 "name": null, 00:07:01.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.384 "is_configured": false, 00:07:01.384 "data_offset": 0, 00:07:01.384 "data_size": 65536 00:07:01.384 }, 00:07:01.384 { 00:07:01.384 "name": "BaseBdev2", 00:07:01.384 "uuid": "374c3bea-3409-4fbd-9aef-ee60d4e7502b", 00:07:01.384 "is_configured": true, 00:07:01.384 "data_offset": 0, 00:07:01.384 "data_size": 65536 00:07:01.384 } 00:07:01.384 ] 00:07:01.384 }' 00:07:01.384 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.384 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.643 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.643 [2024-12-12 19:35:44.447926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:01.643 [2024-12-12 19:35:44.448026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:01.903 19:35:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62413 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62413 ']' 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62413 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62413 00:07:01.903 killing process with pid 62413 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62413' 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62413 00:07:01.903 [2024-12-12 19:35:44.632308] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:01.903 19:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62413 00:07:01.903 [2024-12-12 19:35:44.650401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.287 19:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:03.287 00:07:03.287 real 0m4.932s 00:07:03.288 user 0m7.089s 00:07:03.288 sys 0m0.815s 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.288 ************************************ 00:07:03.288 END TEST raid_state_function_test 00:07:03.288 ************************************ 00:07:03.288 19:35:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:03.288 19:35:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:03.288 19:35:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.288 19:35:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.288 ************************************ 00:07:03.288 START TEST raid_state_function_test_sb 00:07:03.288 ************************************ 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62666 00:07:03.288 Process raid pid: 62666 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62666' 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62666 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62666 ']' 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.288 19:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.288 [2024-12-12 19:35:45.901653] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:03.288 [2024-12-12 19:35:45.901764] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.288 [2024-12-12 19:35:46.052025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.548 [2024-12-12 19:35:46.165365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.548 [2024-12-12 19:35:46.372712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.548 [2024-12-12 19:35:46.372839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.117 [2024-12-12 19:35:46.734557] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:04.117 [2024-12-12 19:35:46.734611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:04.117 [2024-12-12 19:35:46.734634] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.117 [2024-12-12 19:35:46.734660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.117 
19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.117 "name": "Existed_Raid", 00:07:04.117 "uuid": "7d0ee019-59f5-4ac8-93c8-b34bc34ca6c8", 00:07:04.117 "strip_size_kb": 
64, 00:07:04.117 "state": "configuring", 00:07:04.117 "raid_level": "raid0", 00:07:04.117 "superblock": true, 00:07:04.117 "num_base_bdevs": 2, 00:07:04.117 "num_base_bdevs_discovered": 0, 00:07:04.117 "num_base_bdevs_operational": 2, 00:07:04.117 "base_bdevs_list": [ 00:07:04.117 { 00:07:04.117 "name": "BaseBdev1", 00:07:04.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.117 "is_configured": false, 00:07:04.117 "data_offset": 0, 00:07:04.117 "data_size": 0 00:07:04.117 }, 00:07:04.117 { 00:07:04.117 "name": "BaseBdev2", 00:07:04.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.117 "is_configured": false, 00:07:04.117 "data_offset": 0, 00:07:04.117 "data_size": 0 00:07:04.117 } 00:07:04.117 ] 00:07:04.117 }' 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.117 19:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.386 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:04.386 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.386 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.386 [2024-12-12 19:35:47.213644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:04.386 [2024-12-12 19:35:47.213736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:04.386 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.386 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:04.386 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.386 19:35:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.660 [2024-12-12 19:35:47.225620] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:04.660 [2024-12-12 19:35:47.225705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:04.660 [2024-12-12 19:35:47.225733] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.660 [2024-12-12 19:35:47.225759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.660 [2024-12-12 19:35:47.271894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.660 BaseBdev1 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.660 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.660 [ 00:07:04.660 { 00:07:04.660 "name": "BaseBdev1", 00:07:04.660 "aliases": [ 00:07:04.660 "de42ec3d-159e-4521-bbd2-f19ed5ac2119" 00:07:04.660 ], 00:07:04.660 "product_name": "Malloc disk", 00:07:04.660 "block_size": 512, 00:07:04.660 "num_blocks": 65536, 00:07:04.660 "uuid": "de42ec3d-159e-4521-bbd2-f19ed5ac2119", 00:07:04.660 "assigned_rate_limits": { 00:07:04.660 "rw_ios_per_sec": 0, 00:07:04.660 "rw_mbytes_per_sec": 0, 00:07:04.660 "r_mbytes_per_sec": 0, 00:07:04.660 "w_mbytes_per_sec": 0 00:07:04.660 }, 00:07:04.660 "claimed": true, 00:07:04.660 "claim_type": "exclusive_write", 00:07:04.660 "zoned": false, 00:07:04.660 "supported_io_types": { 00:07:04.660 "read": true, 00:07:04.660 "write": true, 00:07:04.660 "unmap": true, 00:07:04.660 "flush": true, 00:07:04.660 "reset": true, 00:07:04.660 "nvme_admin": false, 00:07:04.660 "nvme_io": false, 00:07:04.660 "nvme_io_md": false, 00:07:04.660 "write_zeroes": true, 00:07:04.660 "zcopy": true, 00:07:04.660 "get_zone_info": false, 00:07:04.660 "zone_management": false, 00:07:04.660 "zone_append": false, 00:07:04.660 "compare": false, 00:07:04.660 "compare_and_write": false, 00:07:04.660 
"abort": true, 00:07:04.660 "seek_hole": false, 00:07:04.660 "seek_data": false, 00:07:04.661 "copy": true, 00:07:04.661 "nvme_iov_md": false 00:07:04.661 }, 00:07:04.661 "memory_domains": [ 00:07:04.661 { 00:07:04.661 "dma_device_id": "system", 00:07:04.661 "dma_device_type": 1 00:07:04.661 }, 00:07:04.661 { 00:07:04.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.661 "dma_device_type": 2 00:07:04.661 } 00:07:04.661 ], 00:07:04.661 "driver_specific": {} 00:07:04.661 } 00:07:04.661 ] 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.661 "name": "Existed_Raid", 00:07:04.661 "uuid": "49f39191-c2fc-408a-aee1-bd09848f1971", 00:07:04.661 "strip_size_kb": 64, 00:07:04.661 "state": "configuring", 00:07:04.661 "raid_level": "raid0", 00:07:04.661 "superblock": true, 00:07:04.661 "num_base_bdevs": 2, 00:07:04.661 "num_base_bdevs_discovered": 1, 00:07:04.661 "num_base_bdevs_operational": 2, 00:07:04.661 "base_bdevs_list": [ 00:07:04.661 { 00:07:04.661 "name": "BaseBdev1", 00:07:04.661 "uuid": "de42ec3d-159e-4521-bbd2-f19ed5ac2119", 00:07:04.661 "is_configured": true, 00:07:04.661 "data_offset": 2048, 00:07:04.661 "data_size": 63488 00:07:04.661 }, 00:07:04.661 { 00:07:04.661 "name": "BaseBdev2", 00:07:04.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.661 "is_configured": false, 00:07:04.661 "data_offset": 0, 00:07:04.661 "data_size": 0 00:07:04.661 } 00:07:04.661 ] 00:07:04.661 }' 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.661 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.230 [2024-12-12 19:35:47.771122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:05.230 [2024-12-12 19:35:47.771280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.230 [2024-12-12 19:35:47.783186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:05.230 [2024-12-12 19:35:47.785011] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.230 [2024-12-12 19:35:47.785061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.230 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.230 "name": "Existed_Raid", 00:07:05.230 "uuid": "fb47b5e1-30d3-4b7a-9a55-07782581cca9", 00:07:05.230 "strip_size_kb": 64, 00:07:05.230 "state": "configuring", 00:07:05.230 "raid_level": "raid0", 00:07:05.230 "superblock": true, 00:07:05.230 "num_base_bdevs": 2, 00:07:05.230 "num_base_bdevs_discovered": 1, 00:07:05.230 "num_base_bdevs_operational": 2, 00:07:05.230 "base_bdevs_list": [ 00:07:05.230 { 00:07:05.230 "name": "BaseBdev1", 00:07:05.230 "uuid": "de42ec3d-159e-4521-bbd2-f19ed5ac2119", 00:07:05.230 "is_configured": true, 00:07:05.230 "data_offset": 2048, 
00:07:05.230 "data_size": 63488 00:07:05.230 }, 00:07:05.230 { 00:07:05.230 "name": "BaseBdev2", 00:07:05.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.231 "is_configured": false, 00:07:05.231 "data_offset": 0, 00:07:05.231 "data_size": 0 00:07:05.231 } 00:07:05.231 ] 00:07:05.231 }' 00:07:05.231 19:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.231 19:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.490 [2024-12-12 19:35:48.228203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.490 [2024-12-12 19:35:48.228600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:05.490 [2024-12-12 19:35:48.228652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:05.490 [2024-12-12 19:35:48.228950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:05.490 [2024-12-12 19:35:48.229149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:05.490 [2024-12-12 19:35:48.229221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 BaseBdev2 00:07:05.490 [2024-12-12 19:35:48.229456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- #
waitforbdev BaseBdev2 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.490 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.490 [ 00:07:05.490 { 00:07:05.490 "name": "BaseBdev2", 00:07:05.490 "aliases": [ 00:07:05.490 "23127d91-e0df-48ce-ab4a-b128d6bc4ce5" 00:07:05.490 ], 00:07:05.490 "product_name": "Malloc disk", 00:07:05.490 "block_size": 512, 00:07:05.490 "num_blocks": 65536, 00:07:05.490 "uuid": "23127d91-e0df-48ce-ab4a-b128d6bc4ce5", 00:07:05.490 "assigned_rate_limits": { 00:07:05.490 "rw_ios_per_sec": 0, 00:07:05.490 "rw_mbytes_per_sec": 0, 00:07:05.490 "r_mbytes_per_sec": 0, 00:07:05.490 "w_mbytes_per_sec": 0 00:07:05.490 }, 00:07:05.490 "claimed": true, 00:07:05.490 "claim_type": 
"exclusive_write", 00:07:05.490 "zoned": false, 00:07:05.490 "supported_io_types": { 00:07:05.490 "read": true, 00:07:05.490 "write": true, 00:07:05.490 "unmap": true, 00:07:05.490 "flush": true, 00:07:05.490 "reset": true, 00:07:05.490 "nvme_admin": false, 00:07:05.490 "nvme_io": false, 00:07:05.490 "nvme_io_md": false, 00:07:05.490 "write_zeroes": true, 00:07:05.490 "zcopy": true, 00:07:05.490 "get_zone_info": false, 00:07:05.490 "zone_management": false, 00:07:05.490 "zone_append": false, 00:07:05.490 "compare": false, 00:07:05.490 "compare_and_write": false, 00:07:05.490 "abort": true, 00:07:05.490 "seek_hole": false, 00:07:05.490 "seek_data": false, 00:07:05.490 "copy": true, 00:07:05.490 "nvme_iov_md": false 00:07:05.490 }, 00:07:05.490 "memory_domains": [ 00:07:05.490 { 00:07:05.490 "dma_device_id": "system", 00:07:05.490 "dma_device_type": 1 00:07:05.490 }, 00:07:05.490 { 00:07:05.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.490 "dma_device_type": 2 00:07:05.490 } 00:07:05.490 ], 00:07:05.490 "driver_specific": {} 00:07:05.490 } 00:07:05.490 ] 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.491 "name": "Existed_Raid", 00:07:05.491 "uuid": "fb47b5e1-30d3-4b7a-9a55-07782581cca9", 00:07:05.491 "strip_size_kb": 64, 00:07:05.491 "state": "online", 00:07:05.491 "raid_level": "raid0", 00:07:05.491 "superblock": true, 00:07:05.491 "num_base_bdevs": 2, 00:07:05.491 "num_base_bdevs_discovered": 2, 00:07:05.491 "num_base_bdevs_operational": 2, 00:07:05.491 "base_bdevs_list": [ 00:07:05.491 { 00:07:05.491 "name": "BaseBdev1", 00:07:05.491 "uuid": "de42ec3d-159e-4521-bbd2-f19ed5ac2119", 00:07:05.491 "is_configured": true, 00:07:05.491 "data_offset": 2048, 00:07:05.491 "data_size": 63488 
00:07:05.491 }, 00:07:05.491 { 00:07:05.491 "name": "BaseBdev2", 00:07:05.491 "uuid": "23127d91-e0df-48ce-ab4a-b128d6bc4ce5", 00:07:05.491 "is_configured": true, 00:07:05.491 "data_offset": 2048, 00:07:05.491 "data_size": 63488 00:07:05.491 } 00:07:05.491 ] 00:07:05.491 }' 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.491 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.060 [2024-12-12 19:35:48.719708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:06.060 "name": 
"Existed_Raid", 00:07:06.060 "aliases": [ 00:07:06.060 "fb47b5e1-30d3-4b7a-9a55-07782581cca9" 00:07:06.060 ], 00:07:06.060 "product_name": "Raid Volume", 00:07:06.060 "block_size": 512, 00:07:06.060 "num_blocks": 126976, 00:07:06.060 "uuid": "fb47b5e1-30d3-4b7a-9a55-07782581cca9", 00:07:06.060 "assigned_rate_limits": { 00:07:06.060 "rw_ios_per_sec": 0, 00:07:06.060 "rw_mbytes_per_sec": 0, 00:07:06.060 "r_mbytes_per_sec": 0, 00:07:06.060 "w_mbytes_per_sec": 0 00:07:06.060 }, 00:07:06.060 "claimed": false, 00:07:06.060 "zoned": false, 00:07:06.060 "supported_io_types": { 00:07:06.060 "read": true, 00:07:06.060 "write": true, 00:07:06.060 "unmap": true, 00:07:06.060 "flush": true, 00:07:06.060 "reset": true, 00:07:06.060 "nvme_admin": false, 00:07:06.060 "nvme_io": false, 00:07:06.060 "nvme_io_md": false, 00:07:06.060 "write_zeroes": true, 00:07:06.060 "zcopy": false, 00:07:06.060 "get_zone_info": false, 00:07:06.060 "zone_management": false, 00:07:06.060 "zone_append": false, 00:07:06.060 "compare": false, 00:07:06.060 "compare_and_write": false, 00:07:06.060 "abort": false, 00:07:06.060 "seek_hole": false, 00:07:06.060 "seek_data": false, 00:07:06.060 "copy": false, 00:07:06.060 "nvme_iov_md": false 00:07:06.060 }, 00:07:06.060 "memory_domains": [ 00:07:06.060 { 00:07:06.060 "dma_device_id": "system", 00:07:06.060 "dma_device_type": 1 00:07:06.060 }, 00:07:06.060 { 00:07:06.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.060 "dma_device_type": 2 00:07:06.060 }, 00:07:06.060 { 00:07:06.060 "dma_device_id": "system", 00:07:06.060 "dma_device_type": 1 00:07:06.060 }, 00:07:06.060 { 00:07:06.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.060 "dma_device_type": 2 00:07:06.060 } 00:07:06.060 ], 00:07:06.060 "driver_specific": { 00:07:06.060 "raid": { 00:07:06.060 "uuid": "fb47b5e1-30d3-4b7a-9a55-07782581cca9", 00:07:06.060 "strip_size_kb": 64, 00:07:06.060 "state": "online", 00:07:06.060 "raid_level": "raid0", 00:07:06.060 "superblock": true, 00:07:06.060 
"num_base_bdevs": 2, 00:07:06.060 "num_base_bdevs_discovered": 2, 00:07:06.060 "num_base_bdevs_operational": 2, 00:07:06.060 "base_bdevs_list": [ 00:07:06.060 { 00:07:06.060 "name": "BaseBdev1", 00:07:06.060 "uuid": "de42ec3d-159e-4521-bbd2-f19ed5ac2119", 00:07:06.060 "is_configured": true, 00:07:06.060 "data_offset": 2048, 00:07:06.060 "data_size": 63488 00:07:06.060 }, 00:07:06.060 { 00:07:06.060 "name": "BaseBdev2", 00:07:06.060 "uuid": "23127d91-e0df-48ce-ab4a-b128d6bc4ce5", 00:07:06.060 "is_configured": true, 00:07:06.060 "data_offset": 2048, 00:07:06.060 "data_size": 63488 00:07:06.060 } 00:07:06.060 ] 00:07:06.060 } 00:07:06.060 } 00:07:06.060 }' 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:06.060 BaseBdev2' 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:06.060 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.319 19:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.319 [2024-12-12 19:35:48.971033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:06.319 [2024-12-12 19:35:48.971107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:06.319 [2024-12-12 19:35:48.971175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.319 19:35:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.319 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.319 "name": "Existed_Raid", 00:07:06.319 "uuid": "fb47b5e1-30d3-4b7a-9a55-07782581cca9", 00:07:06.319 "strip_size_kb": 64, 00:07:06.319 "state": "offline", 00:07:06.319 "raid_level": "raid0", 00:07:06.319 "superblock": true, 00:07:06.319 "num_base_bdevs": 2, 00:07:06.319 "num_base_bdevs_discovered": 1, 00:07:06.319 "num_base_bdevs_operational": 1, 00:07:06.319 "base_bdevs_list": [ 00:07:06.319 { 00:07:06.319 "name": null, 00:07:06.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.319 "is_configured": false, 00:07:06.319 "data_offset": 0, 00:07:06.320 "data_size": 63488 00:07:06.320 }, 00:07:06.320 { 00:07:06.320 "name": "BaseBdev2", 00:07:06.320 "uuid": "23127d91-e0df-48ce-ab4a-b128d6bc4ce5", 00:07:06.320 "is_configured": true, 00:07:06.320 "data_offset": 2048, 00:07:06.320 "data_size": 63488 00:07:06.320 } 00:07:06.320 ] 00:07:06.320 }' 00:07:06.320 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.320 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.887 19:35:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.887 [2024-12-12 19:35:49.561133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:06.887 [2024-12-12 19:35:49.561249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:06.887 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.888 19:35:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62666 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62666 ']' 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62666 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.888 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62666 00:07:07.147 killing process with pid 62666 00:07:07.147 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.147 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.147 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62666' 00:07:07.147 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62666 00:07:07.147 [2024-12-12 19:35:49.750200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.147 19:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62666 00:07:07.147 [2024-12-12 19:35:49.766307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.086 19:35:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:08.086 00:07:08.086 real 0m5.072s 00:07:08.086 user 0m7.341s 00:07:08.086 sys 0m0.804s 00:07:08.086 ************************************ 00:07:08.086 END TEST raid_state_function_test_sb 00:07:08.086 ************************************ 00:07:08.086 19:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.086 19:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.086 19:35:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:08.087 19:35:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:08.087 19:35:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.087 19:35:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.346 ************************************ 00:07:08.346 START TEST raid_superblock_test 00:07:08.346 ************************************ 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:08.346 19:35:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62913 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62913 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62913 ']' 00:07:08.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.346 19:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.346 [2024-12-12 19:35:51.037601] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:08.346 [2024-12-12 19:35:51.037821] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62913 ] 00:07:08.606 [2024-12-12 19:35:51.193646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.606 [2024-12-12 19:35:51.309812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.866 [2024-12-12 19:35:51.509384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.866 [2024-12-12 19:35:51.509567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:09.126 19:35:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.126 malloc1 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.126 [2024-12-12 19:35:51.925806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:09.126 [2024-12-12 19:35:51.925865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.126 [2024-12-12 19:35:51.925887] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:09.126 [2024-12-12 19:35:51.925897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.126 [2024-12-12 19:35:51.928073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.126 [2024-12-12 19:35:51.928108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:09.126 pt1 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:09.126 19:35:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.126 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.386 malloc2 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.386 [2024-12-12 19:35:51.982343] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:09.386 [2024-12-12 19:35:51.982456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.386 [2024-12-12 19:35:51.982525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:09.386 
[2024-12-12 19:35:51.982582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.386 [2024-12-12 19:35:51.984781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.386 [2024-12-12 19:35:51.984863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:09.386 pt2 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.386 [2024-12-12 19:35:51.994368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:09.386 [2024-12-12 19:35:51.996119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:09.386 [2024-12-12 19:35:51.996322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:09.386 [2024-12-12 19:35:51.996378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.386 [2024-12-12 19:35:51.996686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:09.386 [2024-12-12 19:35:51.996885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:09.386 [2024-12-12 19:35:51.996927] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:09.386 [2024-12-12 19:35:51.997139] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.386 19:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.386 "name": "raid_bdev1", 00:07:09.386 "uuid": 
"cd0b28f9-476d-410b-9222-2b78d4dc3fd3", 00:07:09.386 "strip_size_kb": 64, 00:07:09.386 "state": "online", 00:07:09.386 "raid_level": "raid0", 00:07:09.386 "superblock": true, 00:07:09.386 "num_base_bdevs": 2, 00:07:09.386 "num_base_bdevs_discovered": 2, 00:07:09.386 "num_base_bdevs_operational": 2, 00:07:09.386 "base_bdevs_list": [ 00:07:09.386 { 00:07:09.386 "name": "pt1", 00:07:09.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.386 "is_configured": true, 00:07:09.386 "data_offset": 2048, 00:07:09.386 "data_size": 63488 00:07:09.386 }, 00:07:09.386 { 00:07:09.386 "name": "pt2", 00:07:09.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.386 "is_configured": true, 00:07:09.386 "data_offset": 2048, 00:07:09.386 "data_size": 63488 00:07:09.386 } 00:07:09.386 ] 00:07:09.386 }' 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.386 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.645 19:35:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.645 [2024-12-12 19:35:52.457853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.645 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.905 "name": "raid_bdev1", 00:07:09.905 "aliases": [ 00:07:09.905 "cd0b28f9-476d-410b-9222-2b78d4dc3fd3" 00:07:09.905 ], 00:07:09.905 "product_name": "Raid Volume", 00:07:09.905 "block_size": 512, 00:07:09.905 "num_blocks": 126976, 00:07:09.905 "uuid": "cd0b28f9-476d-410b-9222-2b78d4dc3fd3", 00:07:09.905 "assigned_rate_limits": { 00:07:09.905 "rw_ios_per_sec": 0, 00:07:09.905 "rw_mbytes_per_sec": 0, 00:07:09.905 "r_mbytes_per_sec": 0, 00:07:09.905 "w_mbytes_per_sec": 0 00:07:09.905 }, 00:07:09.905 "claimed": false, 00:07:09.905 "zoned": false, 00:07:09.905 "supported_io_types": { 00:07:09.905 "read": true, 00:07:09.905 "write": true, 00:07:09.905 "unmap": true, 00:07:09.905 "flush": true, 00:07:09.905 "reset": true, 00:07:09.905 "nvme_admin": false, 00:07:09.905 "nvme_io": false, 00:07:09.905 "nvme_io_md": false, 00:07:09.905 "write_zeroes": true, 00:07:09.905 "zcopy": false, 00:07:09.905 "get_zone_info": false, 00:07:09.905 "zone_management": false, 00:07:09.905 "zone_append": false, 00:07:09.905 "compare": false, 00:07:09.905 "compare_and_write": false, 00:07:09.905 "abort": false, 00:07:09.905 "seek_hole": false, 00:07:09.905 "seek_data": false, 00:07:09.905 "copy": false, 00:07:09.905 "nvme_iov_md": false 00:07:09.905 }, 00:07:09.905 "memory_domains": [ 00:07:09.905 { 00:07:09.905 "dma_device_id": "system", 00:07:09.905 "dma_device_type": 1 00:07:09.905 }, 00:07:09.905 { 00:07:09.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.905 "dma_device_type": 2 00:07:09.905 }, 00:07:09.905 { 00:07:09.905 "dma_device_id": "system", 00:07:09.905 "dma_device_type": 
1 00:07:09.905 }, 00:07:09.905 { 00:07:09.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.905 "dma_device_type": 2 00:07:09.905 } 00:07:09.905 ], 00:07:09.905 "driver_specific": { 00:07:09.905 "raid": { 00:07:09.905 "uuid": "cd0b28f9-476d-410b-9222-2b78d4dc3fd3", 00:07:09.905 "strip_size_kb": 64, 00:07:09.905 "state": "online", 00:07:09.905 "raid_level": "raid0", 00:07:09.905 "superblock": true, 00:07:09.905 "num_base_bdevs": 2, 00:07:09.905 "num_base_bdevs_discovered": 2, 00:07:09.905 "num_base_bdevs_operational": 2, 00:07:09.905 "base_bdevs_list": [ 00:07:09.905 { 00:07:09.905 "name": "pt1", 00:07:09.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.905 "is_configured": true, 00:07:09.905 "data_offset": 2048, 00:07:09.905 "data_size": 63488 00:07:09.905 }, 00:07:09.905 { 00:07:09.905 "name": "pt2", 00:07:09.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.905 "is_configured": true, 00:07:09.905 "data_offset": 2048, 00:07:09.905 "data_size": 63488 00:07:09.905 } 00:07:09.905 ] 00:07:09.905 } 00:07:09.905 } 00:07:09.905 }' 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:09.905 pt2' 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.905 19:35:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.905 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.906 [2024-12-12 19:35:52.685404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cd0b28f9-476d-410b-9222-2b78d4dc3fd3 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cd0b28f9-476d-410b-9222-2b78d4dc3fd3 ']' 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.906 [2024-12-12 19:35:52.733030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:09.906 [2024-12-12 19:35:52.733054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:09.906 [2024-12-12 19:35:52.733130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.906 [2024-12-12 19:35:52.733189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.906 [2024-12-12 19:35:52.733202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.906 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.166 [2024-12-12 19:35:52.868849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:10.166 [2024-12-12 19:35:52.870766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:10.166 [2024-12-12 19:35:52.870827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:10.166 [2024-12-12 19:35:52.870873] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:10.166 [2024-12-12 19:35:52.870887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:10.166 [2024-12-12 19:35:52.870898] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:10.166 request: 00:07:10.166 { 00:07:10.166 "name": "raid_bdev1", 00:07:10.166 "raid_level": "raid0", 00:07:10.166 "base_bdevs": [ 00:07:10.166 "malloc1", 00:07:10.166 "malloc2" 00:07:10.166 ], 00:07:10.166 "strip_size_kb": 64, 00:07:10.166 "superblock": false, 00:07:10.166 "method": "bdev_raid_create", 00:07:10.166 "req_id": 1 00:07:10.166 } 00:07:10.166 Got JSON-RPC error response 00:07:10.166 response: 00:07:10.166 { 00:07:10.166 "code": -17, 00:07:10.166 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:10.166 } 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.166 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.167 [2024-12-12 19:35:52.936703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:10.167 [2024-12-12 19:35:52.936801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.167 [2024-12-12 19:35:52.936853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:10.167 [2024-12-12 19:35:52.936888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.167 [2024-12-12 19:35:52.939221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.167 [2024-12-12 19:35:52.939290] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:10.167 [2024-12-12 19:35:52.939385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:10.167 [2024-12-12 19:35:52.939449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:10.167 pt1 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.167 "name": "raid_bdev1", 00:07:10.167 "uuid": "cd0b28f9-476d-410b-9222-2b78d4dc3fd3", 00:07:10.167 "strip_size_kb": 64, 00:07:10.167 "state": "configuring", 00:07:10.167 "raid_level": "raid0", 00:07:10.167 "superblock": true, 00:07:10.167 "num_base_bdevs": 2, 00:07:10.167 "num_base_bdevs_discovered": 1, 00:07:10.167 "num_base_bdevs_operational": 2, 00:07:10.167 "base_bdevs_list": [ 00:07:10.167 { 00:07:10.167 "name": "pt1", 00:07:10.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.167 "is_configured": true, 00:07:10.167 "data_offset": 2048, 00:07:10.167 "data_size": 63488 00:07:10.167 }, 00:07:10.167 { 00:07:10.167 "name": null, 00:07:10.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.167 "is_configured": false, 00:07:10.167 "data_offset": 2048, 00:07:10.167 "data_size": 63488 00:07:10.167 } 00:07:10.167 ] 00:07:10.167 }' 00:07:10.167 19:35:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.167 19:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.735 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:10.735 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:10.735 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:10.735 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:10.735 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.735 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.735 [2024-12-12 19:35:53.332069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:10.735 [2024-12-12 19:35:53.332185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.735 [2024-12-12 19:35:53.332212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:10.735 [2024-12-12 19:35:53.332223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.735 [2024-12-12 19:35:53.332754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.735 [2024-12-12 19:35:53.332778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:10.735 [2024-12-12 19:35:53.332866] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:10.735 [2024-12-12 19:35:53.332904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:10.735 [2024-12-12 19:35:53.333029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:10.735 [2024-12-12 19:35:53.333040] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.735 [2024-12-12 19:35:53.333310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:10.735 [2024-12-12 19:35:53.333497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:10.735 [2024-12-12 19:35:53.333514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:10.736 [2024-12-12 19:35:53.333715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.736 pt2 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.736 "name": "raid_bdev1", 00:07:10.736 "uuid": "cd0b28f9-476d-410b-9222-2b78d4dc3fd3", 00:07:10.736 "strip_size_kb": 64, 00:07:10.736 "state": "online", 00:07:10.736 "raid_level": "raid0", 00:07:10.736 "superblock": true, 00:07:10.736 "num_base_bdevs": 2, 00:07:10.736 "num_base_bdevs_discovered": 2, 00:07:10.736 "num_base_bdevs_operational": 2, 00:07:10.736 "base_bdevs_list": [ 00:07:10.736 { 00:07:10.736 "name": "pt1", 00:07:10.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.736 "is_configured": true, 00:07:10.736 "data_offset": 2048, 00:07:10.736 "data_size": 63488 00:07:10.736 }, 00:07:10.736 { 00:07:10.736 "name": "pt2", 00:07:10.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.736 "is_configured": true, 00:07:10.736 "data_offset": 2048, 00:07:10.736 "data_size": 63488 00:07:10.736 } 00:07:10.736 ] 00:07:10.736 }' 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.736 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:10.995 
19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.995 [2024-12-12 19:35:53.683706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.995 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.995 "name": "raid_bdev1", 00:07:10.995 "aliases": [ 00:07:10.995 "cd0b28f9-476d-410b-9222-2b78d4dc3fd3" 00:07:10.995 ], 00:07:10.995 "product_name": "Raid Volume", 00:07:10.995 "block_size": 512, 00:07:10.995 "num_blocks": 126976, 00:07:10.996 "uuid": "cd0b28f9-476d-410b-9222-2b78d4dc3fd3", 00:07:10.996 "assigned_rate_limits": { 00:07:10.996 "rw_ios_per_sec": 0, 00:07:10.996 "rw_mbytes_per_sec": 0, 00:07:10.996 "r_mbytes_per_sec": 0, 00:07:10.996 "w_mbytes_per_sec": 0 00:07:10.996 }, 00:07:10.996 "claimed": false, 00:07:10.996 "zoned": false, 00:07:10.996 "supported_io_types": { 00:07:10.996 "read": true, 00:07:10.996 "write": true, 00:07:10.996 "unmap": true, 00:07:10.996 "flush": true, 00:07:10.996 "reset": true, 00:07:10.996 "nvme_admin": false, 00:07:10.996 "nvme_io": false, 00:07:10.996 "nvme_io_md": false, 00:07:10.996 
"write_zeroes": true, 00:07:10.996 "zcopy": false, 00:07:10.996 "get_zone_info": false, 00:07:10.996 "zone_management": false, 00:07:10.996 "zone_append": false, 00:07:10.996 "compare": false, 00:07:10.996 "compare_and_write": false, 00:07:10.996 "abort": false, 00:07:10.996 "seek_hole": false, 00:07:10.996 "seek_data": false, 00:07:10.996 "copy": false, 00:07:10.996 "nvme_iov_md": false 00:07:10.996 }, 00:07:10.996 "memory_domains": [ 00:07:10.996 { 00:07:10.996 "dma_device_id": "system", 00:07:10.996 "dma_device_type": 1 00:07:10.996 }, 00:07:10.996 { 00:07:10.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.996 "dma_device_type": 2 00:07:10.996 }, 00:07:10.996 { 00:07:10.996 "dma_device_id": "system", 00:07:10.996 "dma_device_type": 1 00:07:10.996 }, 00:07:10.996 { 00:07:10.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.996 "dma_device_type": 2 00:07:10.996 } 00:07:10.996 ], 00:07:10.996 "driver_specific": { 00:07:10.996 "raid": { 00:07:10.996 "uuid": "cd0b28f9-476d-410b-9222-2b78d4dc3fd3", 00:07:10.996 "strip_size_kb": 64, 00:07:10.996 "state": "online", 00:07:10.996 "raid_level": "raid0", 00:07:10.996 "superblock": true, 00:07:10.996 "num_base_bdevs": 2, 00:07:10.996 "num_base_bdevs_discovered": 2, 00:07:10.996 "num_base_bdevs_operational": 2, 00:07:10.996 "base_bdevs_list": [ 00:07:10.996 { 00:07:10.996 "name": "pt1", 00:07:10.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.996 "is_configured": true, 00:07:10.996 "data_offset": 2048, 00:07:10.996 "data_size": 63488 00:07:10.996 }, 00:07:10.996 { 00:07:10.996 "name": "pt2", 00:07:10.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.996 "is_configured": true, 00:07:10.996 "data_offset": 2048, 00:07:10.996 "data_size": 63488 00:07:10.996 } 00:07:10.996 ] 00:07:10.996 } 00:07:10.996 } 00:07:10.996 }' 00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:10.996 pt2' 00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.996 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.256 19:35:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.256 [2024-12-12 19:35:53.911259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cd0b28f9-476d-410b-9222-2b78d4dc3fd3 '!=' cd0b28f9-476d-410b-9222-2b78d4dc3fd3 ']' 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62913 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62913 ']' 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62913 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62913 00:07:11.256 killing process with pid 62913 
00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62913' 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62913 00:07:11.256 [2024-12-12 19:35:53.978311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.256 [2024-12-12 19:35:53.978396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.256 [2024-12-12 19:35:53.978445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.256 [2024-12-12 19:35:53.978456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:11.256 19:35:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62913 00:07:11.516 [2024-12-12 19:35:54.184928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.486 19:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:12.486 00:07:12.486 real 0m4.353s 00:07:12.486 user 0m6.076s 00:07:12.486 sys 0m0.731s 00:07:12.486 19:35:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.486 ************************************ 00:07:12.486 END TEST raid_superblock_test 00:07:12.486 ************************************ 00:07:12.486 19:35:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.744 19:35:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:12.744 19:35:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:12.744 19:35:55 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.744 19:35:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.744 ************************************ 00:07:12.744 START TEST raid_read_error_test 00:07:12.744 ************************************ 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:12.744 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:12.745 19:35:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XwdSNb0MBE 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63124 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63124 00:07:12.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63124 ']' 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.745 19:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.745 [2024-12-12 19:35:55.462943] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:12.745 [2024-12-12 19:35:55.463109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63124 ] 00:07:13.003 [2024-12-12 19:35:55.635321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.003 [2024-12-12 19:35:55.751536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.263 [2024-12-12 19:35:55.954775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.263 [2024-12-12 19:35:55.954805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.523 BaseBdev1_malloc 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.523 true 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.523 [2024-12-12 19:35:56.346227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:13.523 [2024-12-12 19:35:56.346352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.523 [2024-12-12 19:35:56.346377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:13.523 [2024-12-12 19:35:56.346387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.523 [2024-12-12 19:35:56.348398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.523 [2024-12-12 19:35:56.348440] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:13.523 BaseBdev1 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.523 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:13.783 BaseBdev2_malloc 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.783 true 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.783 [2024-12-12 19:35:56.415588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:13.783 [2024-12-12 19:35:56.415684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.783 [2024-12-12 19:35:56.415708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:13.783 [2024-12-12 19:35:56.415719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.783 [2024-12-12 19:35:56.418096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.783 [2024-12-12 19:35:56.418179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:13.783 BaseBdev2 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:13.783 19:35:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.783 [2024-12-12 19:35:56.427642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.783 [2024-12-12 19:35:56.429782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:13.783 [2024-12-12 19:35:56.430093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:13.783 [2024-12-12 19:35:56.430160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:13.783 [2024-12-12 19:35:56.430567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:13.783 [2024-12-12 19:35:56.430827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:13.783 [2024-12-12 19:35:56.430884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:13.783 [2024-12-12 19:35:56.431154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.783 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.784 "name": "raid_bdev1", 00:07:13.784 "uuid": "6ae7b5ad-2e9a-43bc-908c-7a26574d7638", 00:07:13.784 "strip_size_kb": 64, 00:07:13.784 "state": "online", 00:07:13.784 "raid_level": "raid0", 00:07:13.784 "superblock": true, 00:07:13.784 "num_base_bdevs": 2, 00:07:13.784 "num_base_bdevs_discovered": 2, 00:07:13.784 "num_base_bdevs_operational": 2, 00:07:13.784 "base_bdevs_list": [ 00:07:13.784 { 00:07:13.784 "name": "BaseBdev1", 00:07:13.784 "uuid": "5270826c-c0fa-52e8-9ca3-78739ff10f8d", 00:07:13.784 "is_configured": true, 00:07:13.784 "data_offset": 2048, 00:07:13.784 "data_size": 63488 00:07:13.784 }, 00:07:13.784 { 00:07:13.784 "name": "BaseBdev2", 00:07:13.784 "uuid": "6ba45dd0-cfee-5533-b8b0-b69c8bd377fc", 00:07:13.784 "is_configured": true, 00:07:13.784 "data_offset": 2048, 00:07:13.784 "data_size": 63488 00:07:13.784 } 00:07:13.784 ] 00:07:13.784 }' 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.784 19:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.354 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:14.354 19:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:14.354 [2024-12-12 19:35:56.995973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:15.292 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:15.292 19:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.292 19:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.292 19:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.292 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:15.292 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:15.292 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.293 "name": "raid_bdev1", 00:07:15.293 "uuid": "6ae7b5ad-2e9a-43bc-908c-7a26574d7638", 00:07:15.293 "strip_size_kb": 64, 00:07:15.293 "state": "online", 00:07:15.293 "raid_level": "raid0", 00:07:15.293 "superblock": true, 00:07:15.293 "num_base_bdevs": 2, 00:07:15.293 "num_base_bdevs_discovered": 2, 00:07:15.293 "num_base_bdevs_operational": 2, 00:07:15.293 "base_bdevs_list": [ 00:07:15.293 { 00:07:15.293 "name": "BaseBdev1", 00:07:15.293 "uuid": "5270826c-c0fa-52e8-9ca3-78739ff10f8d", 00:07:15.293 "is_configured": true, 00:07:15.293 "data_offset": 2048, 00:07:15.293 "data_size": 63488 00:07:15.293 }, 00:07:15.293 { 00:07:15.293 "name": "BaseBdev2", 00:07:15.293 "uuid": "6ba45dd0-cfee-5533-b8b0-b69c8bd377fc", 00:07:15.293 "is_configured": true, 00:07:15.293 "data_offset": 2048, 00:07:15.293 "data_size": 63488 00:07:15.293 } 00:07:15.293 ] 00:07:15.293 }' 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.293 19:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.552 19:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:15.552 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.552 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.552 [2024-12-12 19:35:58.335665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:15.552 [2024-12-12 19:35:58.335702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.552 [2024-12-12 19:35:58.338470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.552 [2024-12-12 19:35:58.338513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.552 [2024-12-12 19:35:58.338553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.552 [2024-12-12 19:35:58.338565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:15.552 { 00:07:15.552 "results": [ 00:07:15.552 { 00:07:15.552 "job": "raid_bdev1", 00:07:15.552 "core_mask": "0x1", 00:07:15.552 "workload": "randrw", 00:07:15.552 "percentage": 50, 00:07:15.552 "status": "finished", 00:07:15.552 "queue_depth": 1, 00:07:15.552 "io_size": 131072, 00:07:15.552 "runtime": 1.3406, 00:07:15.552 "iops": 15490.079069073548, 00:07:15.552 "mibps": 1936.2598836341936, 00:07:15.552 "io_failed": 1, 00:07:15.552 "io_timeout": 0, 00:07:15.552 "avg_latency_us": 89.36059296292846, 00:07:15.552 "min_latency_us": 27.165065502183406, 00:07:15.552 "max_latency_us": 1387.989519650655 00:07:15.552 } 00:07:15.552 ], 00:07:15.552 "core_count": 1 00:07:15.552 } 00:07:15.552 19:35:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.552 19:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63124 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63124 ']' 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63124 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63124 00:07:15.553 killing process with pid 63124 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63124' 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63124 00:07:15.553 19:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63124 00:07:15.553 [2024-12-12 19:35:58.370079] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.812 [2024-12-12 19:35:58.506916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XwdSNb0MBE 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:17.194 ************************************ 00:07:17.194 END TEST raid_read_error_test 00:07:17.194 ************************************ 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:17.194 00:07:17.194 real 0m4.354s 00:07:17.194 user 0m5.183s 00:07:17.194 sys 0m0.539s 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.194 19:35:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.194 19:35:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:17.194 19:35:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:17.194 19:35:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.194 19:35:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.194 ************************************ 00:07:17.194 START TEST raid_write_error_test 00:07:17.194 ************************************ 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.194 19:35:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TTJfn2LLr6 00:07:17.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63264 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63264 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63264 ']' 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.194 19:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:17.194 [2024-12-12 19:35:59.876482] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:17.194 [2024-12-12 19:35:59.876611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63264 ] 00:07:17.454 [2024-12-12 19:36:00.049589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.454 [2024-12-12 19:36:00.165253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.713 [2024-12-12 19:36:00.369656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.713 [2024-12-12 19:36:00.369717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.973 BaseBdev1_malloc 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.973 true 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.973 [2024-12-12 19:36:00.763989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:17.973 [2024-12-12 19:36:00.764045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.973 [2024-12-12 19:36:00.764081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:17.973 [2024-12-12 19:36:00.764092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.973 [2024-12-12 19:36:00.766192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.973 [2024-12-12 19:36:00.766234] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:17.973 BaseBdev1 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.973 BaseBdev2_malloc 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:17.973 19:36:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.973 true 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.973 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.233 [2024-12-12 19:36:00.816742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:18.233 [2024-12-12 19:36:00.816799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.233 [2024-12-12 19:36:00.816816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:18.233 [2024-12-12 19:36:00.816827] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.233 [2024-12-12 19:36:00.819008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.233 [2024-12-12 19:36:00.819046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:18.233 BaseBdev2 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.233 [2024-12-12 19:36:00.824782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:18.233 [2024-12-12 19:36:00.826657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.233 [2024-12-12 19:36:00.826839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:18.233 [2024-12-12 19:36:00.826856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:18.233 [2024-12-12 19:36:00.827072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:18.233 [2024-12-12 19:36:00.827242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:18.233 [2024-12-12 19:36:00.827253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:18.233 [2024-12-12 19:36:00.827406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.233 "name": "raid_bdev1", 00:07:18.233 "uuid": "9056d9cb-9e5c-4ea9-ae40-828cce33bf3a", 00:07:18.233 "strip_size_kb": 64, 00:07:18.233 "state": "online", 00:07:18.233 "raid_level": "raid0", 00:07:18.233 "superblock": true, 00:07:18.233 "num_base_bdevs": 2, 00:07:18.233 "num_base_bdevs_discovered": 2, 00:07:18.233 "num_base_bdevs_operational": 2, 00:07:18.233 "base_bdevs_list": [ 00:07:18.233 { 00:07:18.233 "name": "BaseBdev1", 00:07:18.233 "uuid": "16477fa2-2e10-5fb5-8198-8ef23a737797", 00:07:18.233 "is_configured": true, 00:07:18.233 "data_offset": 2048, 00:07:18.233 "data_size": 63488 00:07:18.233 }, 00:07:18.233 { 00:07:18.233 "name": "BaseBdev2", 00:07:18.233 "uuid": "f204bedd-f519-58dc-ba32-bc9ee4990f70", 00:07:18.233 "is_configured": true, 00:07:18.233 "data_offset": 2048, 00:07:18.233 "data_size": 63488 00:07:18.233 } 00:07:18.233 ] 00:07:18.233 }' 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.233 19:36:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.493 19:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:18.493 19:36:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:18.752 [2024-12-12 19:36:01.365015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.692 19:36:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.692 "name": "raid_bdev1", 00:07:19.692 "uuid": "9056d9cb-9e5c-4ea9-ae40-828cce33bf3a", 00:07:19.692 "strip_size_kb": 64, 00:07:19.692 "state": "online", 00:07:19.692 "raid_level": "raid0", 00:07:19.692 "superblock": true, 00:07:19.692 "num_base_bdevs": 2, 00:07:19.692 "num_base_bdevs_discovered": 2, 00:07:19.692 "num_base_bdevs_operational": 2, 00:07:19.692 "base_bdevs_list": [ 00:07:19.692 { 00:07:19.692 "name": "BaseBdev1", 00:07:19.692 "uuid": "16477fa2-2e10-5fb5-8198-8ef23a737797", 00:07:19.692 "is_configured": true, 00:07:19.692 "data_offset": 2048, 00:07:19.692 "data_size": 63488 00:07:19.692 }, 00:07:19.692 { 00:07:19.692 "name": "BaseBdev2", 00:07:19.692 "uuid": "f204bedd-f519-58dc-ba32-bc9ee4990f70", 00:07:19.692 "is_configured": true, 00:07:19.692 "data_offset": 2048, 00:07:19.692 "data_size": 63488 00:07:19.692 } 00:07:19.692 ] 00:07:19.692 }' 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.692 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.952 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.952 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.952 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.952 [2024-12-12 19:36:02.744935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.952 [2024-12-12 19:36:02.745031] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.952 [2024-12-12 19:36:02.747913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.952 [2024-12-12 19:36:02.748008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.952 [2024-12-12 19:36:02.748076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.953 [2024-12-12 19:36:02.748137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63264 00:07:19.953 { 00:07:19.953 "results": [ 00:07:19.953 { 00:07:19.953 "job": "raid_bdev1", 00:07:19.953 "core_mask": "0x1", 00:07:19.953 "workload": "randrw", 00:07:19.953 "percentage": 50, 00:07:19.953 "status": "finished", 00:07:19.953 "queue_depth": 1, 00:07:19.953 "io_size": 131072, 00:07:19.953 "runtime": 1.381017, 00:07:19.953 "iops": 15612.407378040965, 00:07:19.953 "mibps": 1951.5509222551207, 00:07:19.953 "io_failed": 1, 00:07:19.953 "io_timeout": 0, 00:07:19.953 "avg_latency_us": 88.64656720601381, 00:07:19.953 "min_latency_us": 27.053275109170304, 00:07:19.953 "max_latency_us": 1373.6803493449781 00:07:19.953 } 00:07:19.953 ], 00:07:19.953 "core_count": 1 00:07:19.953 } 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 63264 ']' 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63264 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63264 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.953 killing process with pid 63264 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63264' 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63264 00:07:19.953 19:36:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63264 00:07:19.953 [2024-12-12 19:36:02.792066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.213 [2024-12-12 19:36:02.929056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TTJfn2LLr6 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:21.617 00:07:21.617 real 0m4.335s 00:07:21.617 user 0m5.209s 00:07:21.617 sys 0m0.529s 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.617 ************************************ 00:07:21.617 END TEST raid_write_error_test 00:07:21.617 ************************************ 00:07:21.617 19:36:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.617 19:36:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:21.617 19:36:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:21.617 19:36:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:21.617 19:36:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.617 19:36:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.617 ************************************ 00:07:21.617 START TEST raid_state_function_test 00:07:21.617 ************************************ 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:21.617 Process raid pid: 63408 00:07:21.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63408 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63408' 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63408 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63408 ']' 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.617 19:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.617 [2024-12-12 19:36:04.261941] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:21.617 [2024-12-12 19:36:04.262052] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.617 [2024-12-12 19:36:04.434004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.877 [2024-12-12 19:36:04.546778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.137 [2024-12-12 19:36:04.749709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.137 [2024-12-12 19:36:04.749848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.402 [2024-12-12 19:36:05.109551] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.402 [2024-12-12 19:36:05.109611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.402 [2024-12-12 19:36:05.109622] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.402 [2024-12-12 19:36:05.109633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.402 19:36:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.402 "name": "Existed_Raid", 00:07:22.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.402 "strip_size_kb": 64, 00:07:22.402 "state": "configuring", 00:07:22.402 
"raid_level": "concat", 00:07:22.402 "superblock": false, 00:07:22.402 "num_base_bdevs": 2, 00:07:22.402 "num_base_bdevs_discovered": 0, 00:07:22.402 "num_base_bdevs_operational": 2, 00:07:22.402 "base_bdevs_list": [ 00:07:22.402 { 00:07:22.402 "name": "BaseBdev1", 00:07:22.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.402 "is_configured": false, 00:07:22.402 "data_offset": 0, 00:07:22.402 "data_size": 0 00:07:22.402 }, 00:07:22.402 { 00:07:22.402 "name": "BaseBdev2", 00:07:22.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.402 "is_configured": false, 00:07:22.402 "data_offset": 0, 00:07:22.402 "data_size": 0 00:07:22.402 } 00:07:22.402 ] 00:07:22.402 }' 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.402 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.972 [2024-12-12 19:36:05.524762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.972 [2024-12-12 19:36:05.524851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:22.972 [2024-12-12 19:36:05.532736] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.972 [2024-12-12 19:36:05.532776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.972 [2024-12-12 19:36:05.532785] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.972 [2024-12-12 19:36:05.532812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.972 [2024-12-12 19:36:05.575592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.972 BaseBdev1 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.972 [ 00:07:22.972 { 00:07:22.972 "name": "BaseBdev1", 00:07:22.972 "aliases": [ 00:07:22.972 "fce08172-e4cd-4d2e-971e-2b7cf839b65f" 00:07:22.972 ], 00:07:22.972 "product_name": "Malloc disk", 00:07:22.972 "block_size": 512, 00:07:22.972 "num_blocks": 65536, 00:07:22.972 "uuid": "fce08172-e4cd-4d2e-971e-2b7cf839b65f", 00:07:22.972 "assigned_rate_limits": { 00:07:22.972 "rw_ios_per_sec": 0, 00:07:22.972 "rw_mbytes_per_sec": 0, 00:07:22.972 "r_mbytes_per_sec": 0, 00:07:22.972 "w_mbytes_per_sec": 0 00:07:22.972 }, 00:07:22.972 "claimed": true, 00:07:22.972 "claim_type": "exclusive_write", 00:07:22.972 "zoned": false, 00:07:22.972 "supported_io_types": { 00:07:22.972 "read": true, 00:07:22.972 "write": true, 00:07:22.972 "unmap": true, 00:07:22.972 "flush": true, 00:07:22.972 "reset": true, 00:07:22.972 "nvme_admin": false, 00:07:22.972 "nvme_io": false, 00:07:22.972 "nvme_io_md": false, 00:07:22.972 "write_zeroes": true, 00:07:22.972 "zcopy": true, 00:07:22.972 "get_zone_info": false, 00:07:22.972 "zone_management": false, 00:07:22.972 "zone_append": false, 00:07:22.972 "compare": false, 00:07:22.972 "compare_and_write": false, 00:07:22.972 "abort": true, 00:07:22.972 "seek_hole": false, 00:07:22.972 "seek_data": false, 00:07:22.972 "copy": true, 00:07:22.972 "nvme_iov_md": 
false 00:07:22.972 }, 00:07:22.972 "memory_domains": [ 00:07:22.972 { 00:07:22.972 "dma_device_id": "system", 00:07:22.972 "dma_device_type": 1 00:07:22.972 }, 00:07:22.972 { 00:07:22.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.972 "dma_device_type": 2 00:07:22.972 } 00:07:22.972 ], 00:07:22.972 "driver_specific": {} 00:07:22.972 } 00:07:22.972 ] 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.972 19:36:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.972 "name": "Existed_Raid", 00:07:22.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.972 "strip_size_kb": 64, 00:07:22.972 "state": "configuring", 00:07:22.972 "raid_level": "concat", 00:07:22.972 "superblock": false, 00:07:22.972 "num_base_bdevs": 2, 00:07:22.972 "num_base_bdevs_discovered": 1, 00:07:22.972 "num_base_bdevs_operational": 2, 00:07:22.972 "base_bdevs_list": [ 00:07:22.972 { 00:07:22.972 "name": "BaseBdev1", 00:07:22.972 "uuid": "fce08172-e4cd-4d2e-971e-2b7cf839b65f", 00:07:22.972 "is_configured": true, 00:07:22.972 "data_offset": 0, 00:07:22.972 "data_size": 65536 00:07:22.972 }, 00:07:22.972 { 00:07:22.972 "name": "BaseBdev2", 00:07:22.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.972 "is_configured": false, 00:07:22.972 "data_offset": 0, 00:07:22.972 "data_size": 0 00:07:22.972 } 00:07:22.972 ] 00:07:22.972 }' 00:07:22.972 19:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.973 19:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.232 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.232 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 [2024-12-12 19:36:06.050804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.232 [2024-12-12 19:36:06.050852] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:23.232 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.232 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.232 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.232 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 [2024-12-12 19:36:06.058837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.232 [2024-12-12 19:36:06.060511] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.232 [2024-12-12 19:36:06.060610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.233 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.492 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.492 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.492 "name": "Existed_Raid", 00:07:23.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.492 "strip_size_kb": 64, 00:07:23.492 "state": "configuring", 00:07:23.492 "raid_level": "concat", 00:07:23.492 "superblock": false, 00:07:23.492 "num_base_bdevs": 2, 00:07:23.492 "num_base_bdevs_discovered": 1, 00:07:23.492 "num_base_bdevs_operational": 2, 00:07:23.492 "base_bdevs_list": [ 00:07:23.492 { 00:07:23.492 "name": "BaseBdev1", 00:07:23.492 "uuid": "fce08172-e4cd-4d2e-971e-2b7cf839b65f", 00:07:23.492 "is_configured": true, 00:07:23.492 "data_offset": 0, 00:07:23.492 "data_size": 65536 00:07:23.492 }, 00:07:23.492 { 00:07:23.492 "name": "BaseBdev2", 00:07:23.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.492 "is_configured": false, 00:07:23.492 "data_offset": 0, 00:07:23.492 "data_size": 0 
00:07:23.492 } 00:07:23.492 ] 00:07:23.492 }' 00:07:23.492 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.492 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.752 [2024-12-12 19:36:06.489992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.752 [2024-12-12 19:36:06.490102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.752 [2024-12-12 19:36:06.490115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:23.752 [2024-12-12 19:36:06.490407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:23.752 [2024-12-12 19:36:06.490648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.752 [2024-12-12 19:36:06.490664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:23.752 [2024-12-12 19:36:06.490916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.752 BaseBdev2 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.752 19:36:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.752 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.752 [ 00:07:23.752 { 00:07:23.752 "name": "BaseBdev2", 00:07:23.752 "aliases": [ 00:07:23.752 "c301b123-f8bf-4aad-b634-0edf992378b1" 00:07:23.752 ], 00:07:23.752 "product_name": "Malloc disk", 00:07:23.752 "block_size": 512, 00:07:23.752 "num_blocks": 65536, 00:07:23.752 "uuid": "c301b123-f8bf-4aad-b634-0edf992378b1", 00:07:23.752 "assigned_rate_limits": { 00:07:23.752 "rw_ios_per_sec": 0, 00:07:23.752 "rw_mbytes_per_sec": 0, 00:07:23.752 "r_mbytes_per_sec": 0, 00:07:23.752 "w_mbytes_per_sec": 0 00:07:23.752 }, 00:07:23.753 "claimed": true, 00:07:23.753 "claim_type": "exclusive_write", 00:07:23.753 "zoned": false, 00:07:23.753 "supported_io_types": { 00:07:23.753 "read": true, 00:07:23.753 "write": true, 00:07:23.753 "unmap": true, 00:07:23.753 "flush": true, 00:07:23.753 "reset": true, 00:07:23.753 "nvme_admin": false, 00:07:23.753 "nvme_io": false, 00:07:23.753 "nvme_io_md": 
false, 00:07:23.753 "write_zeroes": true, 00:07:23.753 "zcopy": true, 00:07:23.753 "get_zone_info": false, 00:07:23.753 "zone_management": false, 00:07:23.753 "zone_append": false, 00:07:23.753 "compare": false, 00:07:23.753 "compare_and_write": false, 00:07:23.753 "abort": true, 00:07:23.753 "seek_hole": false, 00:07:23.753 "seek_data": false, 00:07:23.753 "copy": true, 00:07:23.753 "nvme_iov_md": false 00:07:23.753 }, 00:07:23.753 "memory_domains": [ 00:07:23.753 { 00:07:23.753 "dma_device_id": "system", 00:07:23.753 "dma_device_type": 1 00:07:23.753 }, 00:07:23.753 { 00:07:23.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.753 "dma_device_type": 2 00:07:23.753 } 00:07:23.753 ], 00:07:23.753 "driver_specific": {} 00:07:23.753 } 00:07:23.753 ] 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.753 "name": "Existed_Raid", 00:07:23.753 "uuid": "11299786-8892-4638-8234-3760899b1916", 00:07:23.753 "strip_size_kb": 64, 00:07:23.753 "state": "online", 00:07:23.753 "raid_level": "concat", 00:07:23.753 "superblock": false, 00:07:23.753 "num_base_bdevs": 2, 00:07:23.753 "num_base_bdevs_discovered": 2, 00:07:23.753 "num_base_bdevs_operational": 2, 00:07:23.753 "base_bdevs_list": [ 00:07:23.753 { 00:07:23.753 "name": "BaseBdev1", 00:07:23.753 "uuid": "fce08172-e4cd-4d2e-971e-2b7cf839b65f", 00:07:23.753 "is_configured": true, 00:07:23.753 "data_offset": 0, 00:07:23.753 "data_size": 65536 00:07:23.753 }, 00:07:23.753 { 00:07:23.753 "name": "BaseBdev2", 00:07:23.753 "uuid": "c301b123-f8bf-4aad-b634-0edf992378b1", 00:07:23.753 "is_configured": true, 00:07:23.753 "data_offset": 0, 00:07:23.753 "data_size": 65536 00:07:23.753 } 00:07:23.753 ] 00:07:23.753 }' 00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:23.753 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.323 [2024-12-12 19:36:06.965510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:24.323 "name": "Existed_Raid", 00:07:24.323 "aliases": [ 00:07:24.323 "11299786-8892-4638-8234-3760899b1916" 00:07:24.323 ], 00:07:24.323 "product_name": "Raid Volume", 00:07:24.323 "block_size": 512, 00:07:24.323 "num_blocks": 131072, 00:07:24.323 "uuid": "11299786-8892-4638-8234-3760899b1916", 00:07:24.323 "assigned_rate_limits": { 00:07:24.323 "rw_ios_per_sec": 0, 00:07:24.323 "rw_mbytes_per_sec": 0, 00:07:24.323 "r_mbytes_per_sec": 
0, 00:07:24.323 "w_mbytes_per_sec": 0 00:07:24.323 }, 00:07:24.323 "claimed": false, 00:07:24.323 "zoned": false, 00:07:24.323 "supported_io_types": { 00:07:24.323 "read": true, 00:07:24.323 "write": true, 00:07:24.323 "unmap": true, 00:07:24.323 "flush": true, 00:07:24.323 "reset": true, 00:07:24.323 "nvme_admin": false, 00:07:24.323 "nvme_io": false, 00:07:24.323 "nvme_io_md": false, 00:07:24.323 "write_zeroes": true, 00:07:24.323 "zcopy": false, 00:07:24.323 "get_zone_info": false, 00:07:24.323 "zone_management": false, 00:07:24.323 "zone_append": false, 00:07:24.323 "compare": false, 00:07:24.323 "compare_and_write": false, 00:07:24.323 "abort": false, 00:07:24.323 "seek_hole": false, 00:07:24.323 "seek_data": false, 00:07:24.323 "copy": false, 00:07:24.323 "nvme_iov_md": false 00:07:24.323 }, 00:07:24.323 "memory_domains": [ 00:07:24.323 { 00:07:24.323 "dma_device_id": "system", 00:07:24.323 "dma_device_type": 1 00:07:24.323 }, 00:07:24.323 { 00:07:24.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.323 "dma_device_type": 2 00:07:24.323 }, 00:07:24.323 { 00:07:24.323 "dma_device_id": "system", 00:07:24.323 "dma_device_type": 1 00:07:24.323 }, 00:07:24.323 { 00:07:24.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.323 "dma_device_type": 2 00:07:24.323 } 00:07:24.323 ], 00:07:24.323 "driver_specific": { 00:07:24.323 "raid": { 00:07:24.323 "uuid": "11299786-8892-4638-8234-3760899b1916", 00:07:24.323 "strip_size_kb": 64, 00:07:24.323 "state": "online", 00:07:24.323 "raid_level": "concat", 00:07:24.323 "superblock": false, 00:07:24.323 "num_base_bdevs": 2, 00:07:24.323 "num_base_bdevs_discovered": 2, 00:07:24.323 "num_base_bdevs_operational": 2, 00:07:24.323 "base_bdevs_list": [ 00:07:24.323 { 00:07:24.323 "name": "BaseBdev1", 00:07:24.323 "uuid": "fce08172-e4cd-4d2e-971e-2b7cf839b65f", 00:07:24.323 "is_configured": true, 00:07:24.323 "data_offset": 0, 00:07:24.323 "data_size": 65536 00:07:24.323 }, 00:07:24.323 { 00:07:24.323 "name": "BaseBdev2", 
00:07:24.323 "uuid": "c301b123-f8bf-4aad-b634-0edf992378b1", 00:07:24.323 "is_configured": true, 00:07:24.323 "data_offset": 0, 00:07:24.323 "data_size": 65536 00:07:24.323 } 00:07:24.323 ] 00:07:24.323 } 00:07:24.323 } 00:07:24.323 }' 00:07:24.323 19:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:24.323 BaseBdev2' 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.323 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.583 [2024-12-12 19:36:07.196944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:24.583 [2024-12-12 19:36:07.197017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.583 [2024-12-12 19:36:07.197085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.583 "name": "Existed_Raid", 00:07:24.583 "uuid": "11299786-8892-4638-8234-3760899b1916", 00:07:24.583 "strip_size_kb": 64, 00:07:24.583 
"state": "offline", 00:07:24.583 "raid_level": "concat", 00:07:24.583 "superblock": false, 00:07:24.583 "num_base_bdevs": 2, 00:07:24.583 "num_base_bdevs_discovered": 1, 00:07:24.583 "num_base_bdevs_operational": 1, 00:07:24.583 "base_bdevs_list": [ 00:07:24.583 { 00:07:24.583 "name": null, 00:07:24.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.583 "is_configured": false, 00:07:24.583 "data_offset": 0, 00:07:24.583 "data_size": 65536 00:07:24.583 }, 00:07:24.583 { 00:07:24.583 "name": "BaseBdev2", 00:07:24.583 "uuid": "c301b123-f8bf-4aad-b634-0edf992378b1", 00:07:24.583 "is_configured": true, 00:07:24.583 "data_offset": 0, 00:07:24.583 "data_size": 65536 00:07:24.583 } 00:07:24.583 ] 00:07:24.583 }' 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.583 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 [2024-12-12 19:36:07.754745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.153 [2024-12-12 19:36:07.754797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63408 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63408 ']' 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 63408 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63408 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.153 killing process with pid 63408 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63408' 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63408 00:07:25.153 [2024-12-12 19:36:07.943019] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.153 19:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63408 00:07:25.153 [2024-12-12 19:36:07.961058] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:26.533 00:07:26.533 real 0m4.895s 00:07:26.533 user 0m7.042s 00:07:26.533 sys 0m0.787s 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.533 ************************************ 00:07:26.533 END TEST raid_state_function_test 00:07:26.533 ************************************ 00:07:26.533 19:36:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:26.533 19:36:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:26.533 19:36:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.533 19:36:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.533 ************************************ 00:07:26.533 START TEST raid_state_function_test_sb 00:07:26.533 ************************************ 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.533 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63650 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63650' 00:07:26.534 Process raid pid: 63650 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63650 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63650 ']' 00:07:26.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.534 19:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.534 [2024-12-12 19:36:09.225252] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:26.534 [2024-12-12 19:36:09.225422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.793 [2024-12-12 19:36:09.402950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.793 [2024-12-12 19:36:09.519271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.053 [2024-12-12 19:36:09.721000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.053 [2024-12-12 19:36:09.721131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.312 [2024-12-12 19:36:10.059624] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.312 [2024-12-12 19:36:10.059679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.312 [2024-12-12 19:36:10.059692] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.312 [2024-12-12 19:36:10.059703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.312 
19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.312 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.312 "name": "Existed_Raid", 00:07:27.312 "uuid": "4284a49d-29f1-4625-b77a-34990febf46d", 00:07:27.312 "strip_size_kb": 64, 00:07:27.312 "state": "configuring", 00:07:27.312 "raid_level": "concat", 00:07:27.312 "superblock": true, 00:07:27.312 "num_base_bdevs": 2, 00:07:27.313 "num_base_bdevs_discovered": 0, 00:07:27.313 "num_base_bdevs_operational": 2, 00:07:27.313 "base_bdevs_list": [ 00:07:27.313 { 00:07:27.313 "name": "BaseBdev1", 00:07:27.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.313 "is_configured": false, 00:07:27.313 "data_offset": 0, 00:07:27.313 "data_size": 0 00:07:27.313 }, 00:07:27.313 { 00:07:27.313 "name": "BaseBdev2", 00:07:27.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.313 "is_configured": false, 00:07:27.313 "data_offset": 0, 00:07:27.313 "data_size": 0 00:07:27.313 } 00:07:27.313 ] 00:07:27.313 }' 00:07:27.313 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.313 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.882 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.882 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:27.882 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.882 [2024-12-12 19:36:10.486805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.882 [2024-12-12 19:36:10.486889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:27.882 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.883 [2024-12-12 19:36:10.498779] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.883 [2024-12-12 19:36:10.498876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.883 [2024-12-12 19:36:10.498902] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.883 [2024-12-12 19:36:10.498927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.883 [2024-12-12 19:36:10.544421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:07:27.883 BaseBdev1 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.883 [ 00:07:27.883 { 00:07:27.883 "name": "BaseBdev1", 00:07:27.883 "aliases": [ 00:07:27.883 "e1d4ef11-27d3-41ed-92e4-ad97c0da6993" 00:07:27.883 ], 00:07:27.883 "product_name": "Malloc disk", 00:07:27.883 "block_size": 512, 00:07:27.883 "num_blocks": 65536, 00:07:27.883 "uuid": "e1d4ef11-27d3-41ed-92e4-ad97c0da6993", 00:07:27.883 
"assigned_rate_limits": { 00:07:27.883 "rw_ios_per_sec": 0, 00:07:27.883 "rw_mbytes_per_sec": 0, 00:07:27.883 "r_mbytes_per_sec": 0, 00:07:27.883 "w_mbytes_per_sec": 0 00:07:27.883 }, 00:07:27.883 "claimed": true, 00:07:27.883 "claim_type": "exclusive_write", 00:07:27.883 "zoned": false, 00:07:27.883 "supported_io_types": { 00:07:27.883 "read": true, 00:07:27.883 "write": true, 00:07:27.883 "unmap": true, 00:07:27.883 "flush": true, 00:07:27.883 "reset": true, 00:07:27.883 "nvme_admin": false, 00:07:27.883 "nvme_io": false, 00:07:27.883 "nvme_io_md": false, 00:07:27.883 "write_zeroes": true, 00:07:27.883 "zcopy": true, 00:07:27.883 "get_zone_info": false, 00:07:27.883 "zone_management": false, 00:07:27.883 "zone_append": false, 00:07:27.883 "compare": false, 00:07:27.883 "compare_and_write": false, 00:07:27.883 "abort": true, 00:07:27.883 "seek_hole": false, 00:07:27.883 "seek_data": false, 00:07:27.883 "copy": true, 00:07:27.883 "nvme_iov_md": false 00:07:27.883 }, 00:07:27.883 "memory_domains": [ 00:07:27.883 { 00:07:27.883 "dma_device_id": "system", 00:07:27.883 "dma_device_type": 1 00:07:27.883 }, 00:07:27.883 { 00:07:27.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.883 "dma_device_type": 2 00:07:27.883 } 00:07:27.883 ], 00:07:27.883 "driver_specific": {} 00:07:27.883 } 00:07:27.883 ] 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.883 "name": "Existed_Raid", 00:07:27.883 "uuid": "34d3b296-f1c3-48f8-80d9-0ca0a0d52dc1", 00:07:27.883 "strip_size_kb": 64, 00:07:27.883 "state": "configuring", 00:07:27.883 "raid_level": "concat", 00:07:27.883 "superblock": true, 00:07:27.883 "num_base_bdevs": 2, 00:07:27.883 "num_base_bdevs_discovered": 1, 00:07:27.883 "num_base_bdevs_operational": 2, 00:07:27.883 "base_bdevs_list": [ 00:07:27.883 { 00:07:27.883 "name": "BaseBdev1", 00:07:27.883 "uuid": "e1d4ef11-27d3-41ed-92e4-ad97c0da6993", 00:07:27.883 "is_configured": true, 00:07:27.883 "data_offset": 
2048, 00:07:27.883 "data_size": 63488 00:07:27.883 }, 00:07:27.883 { 00:07:27.883 "name": "BaseBdev2", 00:07:27.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.883 "is_configured": false, 00:07:27.883 "data_offset": 0, 00:07:27.883 "data_size": 0 00:07:27.883 } 00:07:27.883 ] 00:07:27.883 }' 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.883 19:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.453 [2024-12-12 19:36:11.007704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.453 [2024-12-12 19:36:11.007823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.453 [2024-12-12 19:36:11.019784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.453 [2024-12-12 19:36:11.021687] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.453 [2024-12-12 19:36:11.021734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.453 "name": "Existed_Raid", 00:07:28.453 "uuid": "3be85785-83bb-4c55-aea2-4e0fac54d4a4", 00:07:28.453 "strip_size_kb": 64, 00:07:28.453 "state": "configuring", 00:07:28.453 "raid_level": "concat", 00:07:28.453 "superblock": true, 00:07:28.453 "num_base_bdevs": 2, 00:07:28.453 "num_base_bdevs_discovered": 1, 00:07:28.453 "num_base_bdevs_operational": 2, 00:07:28.453 "base_bdevs_list": [ 00:07:28.453 { 00:07:28.453 "name": "BaseBdev1", 00:07:28.453 "uuid": "e1d4ef11-27d3-41ed-92e4-ad97c0da6993", 00:07:28.453 "is_configured": true, 00:07:28.453 "data_offset": 2048, 00:07:28.453 "data_size": 63488 00:07:28.453 }, 00:07:28.453 { 00:07:28.453 "name": "BaseBdev2", 00:07:28.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.453 "is_configured": false, 00:07:28.453 "data_offset": 0, 00:07:28.453 "data_size": 0 00:07:28.453 } 00:07:28.453 ] 00:07:28.453 }' 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.453 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.714 [2024-12-12 19:36:11.457814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.714 [2024-12-12 19:36:11.458146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.714 [2024-12-12 19:36:11.458199] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.714 [2024-12-12 19:36:11.458502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:28.714 [2024-12-12 19:36:11.458725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.714 BaseBdev2 00:07:28.714 [2024-12-12 19:36:11.458774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:28.714 [2024-12-12 19:36:11.458939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.714 [ 00:07:28.714 { 00:07:28.714 "name": "BaseBdev2", 00:07:28.714 "aliases": [ 00:07:28.714 "e5e75b1e-c62e-490e-8118-a6226db17029" 00:07:28.714 ], 00:07:28.714 "product_name": "Malloc disk", 00:07:28.714 "block_size": 512, 00:07:28.714 "num_blocks": 65536, 00:07:28.714 "uuid": "e5e75b1e-c62e-490e-8118-a6226db17029", 00:07:28.714 "assigned_rate_limits": { 00:07:28.714 "rw_ios_per_sec": 0, 00:07:28.714 "rw_mbytes_per_sec": 0, 00:07:28.714 "r_mbytes_per_sec": 0, 00:07:28.714 "w_mbytes_per_sec": 0 00:07:28.714 }, 00:07:28.714 "claimed": true, 00:07:28.714 "claim_type": "exclusive_write", 00:07:28.714 "zoned": false, 00:07:28.714 "supported_io_types": { 00:07:28.714 "read": true, 00:07:28.714 "write": true, 00:07:28.714 "unmap": true, 00:07:28.714 "flush": true, 00:07:28.714 "reset": true, 00:07:28.714 "nvme_admin": false, 00:07:28.714 "nvme_io": false, 00:07:28.714 "nvme_io_md": false, 00:07:28.714 "write_zeroes": true, 00:07:28.714 "zcopy": true, 00:07:28.714 "get_zone_info": false, 00:07:28.714 "zone_management": false, 00:07:28.714 "zone_append": false, 00:07:28.714 "compare": false, 00:07:28.714 "compare_and_write": false, 00:07:28.714 "abort": true, 00:07:28.714 "seek_hole": false, 00:07:28.714 "seek_data": false, 00:07:28.714 "copy": true, 00:07:28.714 "nvme_iov_md": false 00:07:28.714 }, 00:07:28.714 "memory_domains": [ 00:07:28.714 { 00:07:28.714 "dma_device_id": "system", 00:07:28.714 "dma_device_type": 1 00:07:28.714 }, 00:07:28.714 { 00:07:28.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.714 "dma_device_type": 2 00:07:28.714 } 00:07:28.714 ], 00:07:28.714 "driver_specific": {} 00:07:28.714 } 00:07:28.714 ] 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.714 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.715 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.715 19:36:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.715 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.715 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.715 "name": "Existed_Raid", 00:07:28.715 "uuid": "3be85785-83bb-4c55-aea2-4e0fac54d4a4", 00:07:28.715 "strip_size_kb": 64, 00:07:28.715 "state": "online", 00:07:28.715 "raid_level": "concat", 00:07:28.715 "superblock": true, 00:07:28.715 "num_base_bdevs": 2, 00:07:28.715 "num_base_bdevs_discovered": 2, 00:07:28.715 "num_base_bdevs_operational": 2, 00:07:28.715 "base_bdevs_list": [ 00:07:28.715 { 00:07:28.715 "name": "BaseBdev1", 00:07:28.715 "uuid": "e1d4ef11-27d3-41ed-92e4-ad97c0da6993", 00:07:28.715 "is_configured": true, 00:07:28.715 "data_offset": 2048, 00:07:28.715 "data_size": 63488 00:07:28.715 }, 00:07:28.715 { 00:07:28.715 "name": "BaseBdev2", 00:07:28.715 "uuid": "e5e75b1e-c62e-490e-8118-a6226db17029", 00:07:28.715 "is_configured": true, 00:07:28.715 "data_offset": 2048, 00:07:28.715 "data_size": 63488 00:07:28.715 } 00:07:28.715 ] 00:07:28.715 }' 00:07:28.715 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.715 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.284 [2024-12-12 19:36:11.921390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.284 "name": "Existed_Raid", 00:07:29.284 "aliases": [ 00:07:29.284 "3be85785-83bb-4c55-aea2-4e0fac54d4a4" 00:07:29.284 ], 00:07:29.284 "product_name": "Raid Volume", 00:07:29.284 "block_size": 512, 00:07:29.284 "num_blocks": 126976, 00:07:29.284 "uuid": "3be85785-83bb-4c55-aea2-4e0fac54d4a4", 00:07:29.284 "assigned_rate_limits": { 00:07:29.284 "rw_ios_per_sec": 0, 00:07:29.284 "rw_mbytes_per_sec": 0, 00:07:29.284 "r_mbytes_per_sec": 0, 00:07:29.284 "w_mbytes_per_sec": 0 00:07:29.284 }, 00:07:29.284 "claimed": false, 00:07:29.284 "zoned": false, 00:07:29.284 "supported_io_types": { 00:07:29.284 "read": true, 00:07:29.284 "write": true, 00:07:29.284 "unmap": true, 00:07:29.284 "flush": true, 00:07:29.284 "reset": true, 00:07:29.284 "nvme_admin": false, 00:07:29.284 "nvme_io": false, 00:07:29.284 "nvme_io_md": false, 00:07:29.284 "write_zeroes": true, 00:07:29.284 "zcopy": false, 00:07:29.284 "get_zone_info": false, 00:07:29.284 "zone_management": false, 00:07:29.284 "zone_append": false, 00:07:29.284 "compare": false, 00:07:29.284 "compare_and_write": false, 00:07:29.284 "abort": false, 00:07:29.284 "seek_hole": false, 
00:07:29.284 "seek_data": false, 00:07:29.284 "copy": false, 00:07:29.284 "nvme_iov_md": false 00:07:29.284 }, 00:07:29.284 "memory_domains": [ 00:07:29.284 { 00:07:29.284 "dma_device_id": "system", 00:07:29.284 "dma_device_type": 1 00:07:29.284 }, 00:07:29.284 { 00:07:29.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.284 "dma_device_type": 2 00:07:29.284 }, 00:07:29.284 { 00:07:29.284 "dma_device_id": "system", 00:07:29.284 "dma_device_type": 1 00:07:29.284 }, 00:07:29.284 { 00:07:29.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.284 "dma_device_type": 2 00:07:29.284 } 00:07:29.284 ], 00:07:29.284 "driver_specific": { 00:07:29.284 "raid": { 00:07:29.284 "uuid": "3be85785-83bb-4c55-aea2-4e0fac54d4a4", 00:07:29.284 "strip_size_kb": 64, 00:07:29.284 "state": "online", 00:07:29.284 "raid_level": "concat", 00:07:29.284 "superblock": true, 00:07:29.284 "num_base_bdevs": 2, 00:07:29.284 "num_base_bdevs_discovered": 2, 00:07:29.284 "num_base_bdevs_operational": 2, 00:07:29.284 "base_bdevs_list": [ 00:07:29.284 { 00:07:29.284 "name": "BaseBdev1", 00:07:29.284 "uuid": "e1d4ef11-27d3-41ed-92e4-ad97c0da6993", 00:07:29.284 "is_configured": true, 00:07:29.284 "data_offset": 2048, 00:07:29.284 "data_size": 63488 00:07:29.284 }, 00:07:29.284 { 00:07:29.284 "name": "BaseBdev2", 00:07:29.284 "uuid": "e5e75b1e-c62e-490e-8118-a6226db17029", 00:07:29.284 "is_configured": true, 00:07:29.284 "data_offset": 2048, 00:07:29.284 "data_size": 63488 00:07:29.284 } 00:07:29.284 ] 00:07:29.284 } 00:07:29.284 } 00:07:29.284 }' 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:29.284 BaseBdev2' 00:07:29.284 19:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.285 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.545 19:36:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.545 [2024-12-12 19:36:12.136783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:29.545 [2024-12-12 19:36:12.136820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.545 [2024-12-12 19:36:12.136870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.545 19:36:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.545 "name": "Existed_Raid", 00:07:29.545 "uuid": "3be85785-83bb-4c55-aea2-4e0fac54d4a4", 00:07:29.545 "strip_size_kb": 64, 00:07:29.545 "state": "offline", 00:07:29.545 "raid_level": "concat", 00:07:29.545 "superblock": true, 00:07:29.545 "num_base_bdevs": 2, 00:07:29.545 "num_base_bdevs_discovered": 1, 00:07:29.545 "num_base_bdevs_operational": 1, 00:07:29.545 "base_bdevs_list": [ 00:07:29.545 { 00:07:29.545 "name": null, 00:07:29.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.545 "is_configured": false, 00:07:29.545 "data_offset": 0, 00:07:29.545 "data_size": 63488 00:07:29.545 }, 00:07:29.545 { 00:07:29.545 "name": 
"BaseBdev2", 00:07:29.545 "uuid": "e5e75b1e-c62e-490e-8118-a6226db17029", 00:07:29.545 "is_configured": true, 00:07:29.545 "data_offset": 2048, 00:07:29.545 "data_size": 63488 00:07:29.545 } 00:07:29.545 ] 00:07:29.545 }' 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.545 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.804 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:29.804 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:29.804 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:29.804 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.804 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.804 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.064 [2024-12-12 19:36:12.694710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.064 [2024-12-12 19:36:12.694808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.064 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63650 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63650 ']' 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63650 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63650 00:07:30.065 killing process with 
pid 63650 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63650' 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63650 00:07:30.065 [2024-12-12 19:36:12.879408] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.065 19:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63650 00:07:30.065 [2024-12-12 19:36:12.895784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.455 19:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:31.455 00:07:31.455 real 0m4.844s 00:07:31.455 user 0m6.945s 00:07:31.455 sys 0m0.816s 00:07:31.455 19:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.455 19:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.455 ************************************ 00:07:31.455 END TEST raid_state_function_test_sb 00:07:31.455 ************************************ 00:07:31.455 19:36:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:31.455 19:36:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:31.455 19:36:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.455 19:36:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.455 ************************************ 00:07:31.455 START TEST raid_superblock_test 00:07:31.455 ************************************ 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # 
raid_superblock_test concat 2 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63902 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:31.455 19:36:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63902 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63902 ']' 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.455 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.455 [2024-12-12 19:36:14.135651] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:31.455 [2024-12-12 19:36:14.135767] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63902 ] 00:07:31.732 [2024-12-12 19:36:14.307795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.732 [2024-12-12 19:36:14.420998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.992 [2024-12-12 19:36:14.618486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.992 [2024-12-12 19:36:14.618521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:32.252 
19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.252 malloc1 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.252 [2024-12-12 19:36:14.984472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:32.252 [2024-12-12 19:36:14.984608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.252 [2024-12-12 19:36:14.984650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:32.252 [2024-12-12 19:36:14.984691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.252 [2024-12-12 19:36:14.986804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.252 [2024-12-12 19:36:14.986888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.252 pt1 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.252 19:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.252 malloc2 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.252 [2024-12-12 19:36:15.035703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:32.252 [2024-12-12 19:36:15.035813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.252 [2024-12-12 19:36:15.035853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:32.252 [2024-12-12 19:36:15.035882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.252 [2024-12-12 19:36:15.037916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.252 [2024-12-12 19:36:15.037982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:32.252 
pt2 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.252 [2024-12-12 19:36:15.047737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:32.252 [2024-12-12 19:36:15.049492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:32.252 [2024-12-12 19:36:15.049716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:32.252 [2024-12-12 19:36:15.049734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.252 [2024-12-12 19:36:15.049968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:32.252 [2024-12-12 19:36:15.050109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:32.252 [2024-12-12 19:36:15.050120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:32.252 [2024-12-12 19:36:15.050261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.252 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.516 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.516 "name": "raid_bdev1", 00:07:32.516 "uuid": "f53a777a-e09f-4a49-96e7-b273b1f6a123", 00:07:32.516 "strip_size_kb": 64, 00:07:32.516 "state": "online", 00:07:32.516 "raid_level": "concat", 00:07:32.516 "superblock": true, 00:07:32.516 "num_base_bdevs": 2, 00:07:32.516 "num_base_bdevs_discovered": 2, 00:07:32.516 "num_base_bdevs_operational": 2, 00:07:32.516 "base_bdevs_list": [ 00:07:32.516 { 00:07:32.516 "name": "pt1", 
00:07:32.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.516 "is_configured": true, 00:07:32.516 "data_offset": 2048, 00:07:32.516 "data_size": 63488 00:07:32.516 }, 00:07:32.516 { 00:07:32.516 "name": "pt2", 00:07:32.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.516 "is_configured": true, 00:07:32.516 "data_offset": 2048, 00:07:32.516 "data_size": 63488 00:07:32.516 } 00:07:32.516 ] 00:07:32.516 }' 00:07:32.516 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.516 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.776 [2024-12-12 19:36:15.515216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.776 19:36:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.776 "name": "raid_bdev1", 00:07:32.776 "aliases": [ 00:07:32.776 "f53a777a-e09f-4a49-96e7-b273b1f6a123" 00:07:32.776 ], 00:07:32.776 "product_name": "Raid Volume", 00:07:32.776 "block_size": 512, 00:07:32.776 "num_blocks": 126976, 00:07:32.776 "uuid": "f53a777a-e09f-4a49-96e7-b273b1f6a123", 00:07:32.776 "assigned_rate_limits": { 00:07:32.776 "rw_ios_per_sec": 0, 00:07:32.776 "rw_mbytes_per_sec": 0, 00:07:32.776 "r_mbytes_per_sec": 0, 00:07:32.776 "w_mbytes_per_sec": 0 00:07:32.776 }, 00:07:32.776 "claimed": false, 00:07:32.776 "zoned": false, 00:07:32.776 "supported_io_types": { 00:07:32.776 "read": true, 00:07:32.776 "write": true, 00:07:32.776 "unmap": true, 00:07:32.776 "flush": true, 00:07:32.776 "reset": true, 00:07:32.776 "nvme_admin": false, 00:07:32.776 "nvme_io": false, 00:07:32.776 "nvme_io_md": false, 00:07:32.776 "write_zeroes": true, 00:07:32.776 "zcopy": false, 00:07:32.776 "get_zone_info": false, 00:07:32.776 "zone_management": false, 00:07:32.776 "zone_append": false, 00:07:32.776 "compare": false, 00:07:32.776 "compare_and_write": false, 00:07:32.776 "abort": false, 00:07:32.776 "seek_hole": false, 00:07:32.776 "seek_data": false, 00:07:32.776 "copy": false, 00:07:32.776 "nvme_iov_md": false 00:07:32.776 }, 00:07:32.776 "memory_domains": [ 00:07:32.776 { 00:07:32.776 "dma_device_id": "system", 00:07:32.776 "dma_device_type": 1 00:07:32.776 }, 00:07:32.776 { 00:07:32.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.776 "dma_device_type": 2 00:07:32.776 }, 00:07:32.776 { 00:07:32.776 "dma_device_id": "system", 00:07:32.776 "dma_device_type": 1 00:07:32.776 }, 00:07:32.776 { 00:07:32.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.777 "dma_device_type": 2 00:07:32.777 } 00:07:32.777 ], 00:07:32.777 "driver_specific": { 00:07:32.777 "raid": { 00:07:32.777 "uuid": "f53a777a-e09f-4a49-96e7-b273b1f6a123", 00:07:32.777 "strip_size_kb": 64, 00:07:32.777 "state": "online", 00:07:32.777 
"raid_level": "concat", 00:07:32.777 "superblock": true, 00:07:32.777 "num_base_bdevs": 2, 00:07:32.777 "num_base_bdevs_discovered": 2, 00:07:32.777 "num_base_bdevs_operational": 2, 00:07:32.777 "base_bdevs_list": [ 00:07:32.777 { 00:07:32.777 "name": "pt1", 00:07:32.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.777 "is_configured": true, 00:07:32.777 "data_offset": 2048, 00:07:32.777 "data_size": 63488 00:07:32.777 }, 00:07:32.777 { 00:07:32.777 "name": "pt2", 00:07:32.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.777 "is_configured": true, 00:07:32.777 "data_offset": 2048, 00:07:32.777 "data_size": 63488 00:07:32.777 } 00:07:32.777 ] 00:07:32.777 } 00:07:32.777 } 00:07:32.777 }' 00:07:32.777 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.777 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:32.777 pt2' 00:07:32.777 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.037 19:36:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:33.037 [2024-12-12 19:36:15.738861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f53a777a-e09f-4a49-96e7-b273b1f6a123 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
f53a777a-e09f-4a49-96e7-b273b1f6a123 ']' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.037 [2024-12-12 19:36:15.786455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.037 [2024-12-12 19:36:15.786519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.037 [2024-12-12 19:36:15.786615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.037 [2024-12-12 19:36:15.786682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.037 [2024-12-12 19:36:15.786695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.037 19:36:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.037 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.297 [2024-12-12 19:36:15.922281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:33.297 [2024-12-12 19:36:15.924155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:33.297 [2024-12-12 19:36:15.924224] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:33.297 [2024-12-12 19:36:15.924280] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:33.297 [2024-12-12 19:36:15.924295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.297 [2024-12-12 19:36:15.924305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:33.297 request: 00:07:33.297 { 00:07:33.297 "name": "raid_bdev1", 00:07:33.297 "raid_level": "concat", 00:07:33.297 "base_bdevs": [ 00:07:33.297 "malloc1", 00:07:33.297 "malloc2" 00:07:33.297 ], 00:07:33.297 "strip_size_kb": 64, 
00:07:33.297 "superblock": false, 00:07:33.297 "method": "bdev_raid_create", 00:07:33.297 "req_id": 1 00:07:33.297 } 00:07:33.297 Got JSON-RPC error response 00:07:33.297 response: 00:07:33.297 { 00:07:33.297 "code": -17, 00:07:33.297 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:33.297 } 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.297 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.298 [2024-12-12 19:36:15.986174] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:07:33.298 [2024-12-12 19:36:15.986323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.298 [2024-12-12 19:36:15.986362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:33.298 [2024-12-12 19:36:15.986400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.298 [2024-12-12 19:36:15.988879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.298 [2024-12-12 19:36:15.988986] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:33.298 [2024-12-12 19:36:15.989117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:33.298 [2024-12-12 19:36:15.989259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:33.298 pt1 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.298 19:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.298 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.298 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.298 "name": "raid_bdev1", 00:07:33.298 "uuid": "f53a777a-e09f-4a49-96e7-b273b1f6a123", 00:07:33.298 "strip_size_kb": 64, 00:07:33.298 "state": "configuring", 00:07:33.298 "raid_level": "concat", 00:07:33.298 "superblock": true, 00:07:33.298 "num_base_bdevs": 2, 00:07:33.298 "num_base_bdevs_discovered": 1, 00:07:33.298 "num_base_bdevs_operational": 2, 00:07:33.298 "base_bdevs_list": [ 00:07:33.298 { 00:07:33.298 "name": "pt1", 00:07:33.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.298 "is_configured": true, 00:07:33.298 "data_offset": 2048, 00:07:33.298 "data_size": 63488 00:07:33.298 }, 00:07:33.298 { 00:07:33.298 "name": null, 00:07:33.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.298 "is_configured": false, 00:07:33.298 "data_offset": 2048, 00:07:33.298 "data_size": 63488 00:07:33.298 } 00:07:33.298 ] 00:07:33.298 }' 00:07:33.298 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.298 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.866 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:33.866 19:36:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:33.866 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.866 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:33.866 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.866 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.866 [2024-12-12 19:36:16.441417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:33.866 [2024-12-12 19:36:16.441560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.866 [2024-12-12 19:36:16.441606] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:33.866 [2024-12-12 19:36:16.441654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.866 [2024-12-12 19:36:16.442231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.867 [2024-12-12 19:36:16.442313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:33.867 [2024-12-12 19:36:16.442456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:33.867 pt2 00:07:33.867 [2024-12-12 19:36:16.442523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:33.867 [2024-12-12 19:36:16.442712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.867 [2024-12-12 19:36:16.442726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.867 [2024-12-12 19:36:16.442992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:33.867 [2024-12-12 19:36:16.443147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:07:33.867 [2024-12-12 19:36:16.443156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:33.867 [2024-12-12 19:36:16.443312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.867 "name": "raid_bdev1", 00:07:33.867 "uuid": "f53a777a-e09f-4a49-96e7-b273b1f6a123", 00:07:33.867 "strip_size_kb": 64, 00:07:33.867 "state": "online", 00:07:33.867 "raid_level": "concat", 00:07:33.867 "superblock": true, 00:07:33.867 "num_base_bdevs": 2, 00:07:33.867 "num_base_bdevs_discovered": 2, 00:07:33.867 "num_base_bdevs_operational": 2, 00:07:33.867 "base_bdevs_list": [ 00:07:33.867 { 00:07:33.867 "name": "pt1", 00:07:33.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.867 "is_configured": true, 00:07:33.867 "data_offset": 2048, 00:07:33.867 "data_size": 63488 00:07:33.867 }, 00:07:33.867 { 00:07:33.867 "name": "pt2", 00:07:33.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.867 "is_configured": true, 00:07:33.867 "data_offset": 2048, 00:07:33.867 "data_size": 63488 00:07:33.867 } 00:07:33.867 ] 00:07:33.867 }' 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.867 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.126 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:34.126 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:34.126 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.126 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.126 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.127 19:36:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.127 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:34.127 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.127 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.127 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.127 [2024-12-12 19:36:16.892943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.127 19:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.127 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.127 "name": "raid_bdev1", 00:07:34.127 "aliases": [ 00:07:34.127 "f53a777a-e09f-4a49-96e7-b273b1f6a123" 00:07:34.127 ], 00:07:34.127 "product_name": "Raid Volume", 00:07:34.127 "block_size": 512, 00:07:34.127 "num_blocks": 126976, 00:07:34.127 "uuid": "f53a777a-e09f-4a49-96e7-b273b1f6a123", 00:07:34.127 "assigned_rate_limits": { 00:07:34.127 "rw_ios_per_sec": 0, 00:07:34.127 "rw_mbytes_per_sec": 0, 00:07:34.127 "r_mbytes_per_sec": 0, 00:07:34.127 "w_mbytes_per_sec": 0 00:07:34.127 }, 00:07:34.127 "claimed": false, 00:07:34.127 "zoned": false, 00:07:34.127 "supported_io_types": { 00:07:34.127 "read": true, 00:07:34.127 "write": true, 00:07:34.127 "unmap": true, 00:07:34.127 "flush": true, 00:07:34.127 "reset": true, 00:07:34.127 "nvme_admin": false, 00:07:34.127 "nvme_io": false, 00:07:34.127 "nvme_io_md": false, 00:07:34.127 "write_zeroes": true, 00:07:34.127 "zcopy": false, 00:07:34.127 "get_zone_info": false, 00:07:34.127 "zone_management": false, 00:07:34.127 "zone_append": false, 00:07:34.127 "compare": false, 00:07:34.127 "compare_and_write": false, 00:07:34.127 "abort": false, 00:07:34.127 "seek_hole": false, 00:07:34.127 
"seek_data": false, 00:07:34.127 "copy": false, 00:07:34.127 "nvme_iov_md": false 00:07:34.127 }, 00:07:34.127 "memory_domains": [ 00:07:34.127 { 00:07:34.127 "dma_device_id": "system", 00:07:34.127 "dma_device_type": 1 00:07:34.127 }, 00:07:34.127 { 00:07:34.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.127 "dma_device_type": 2 00:07:34.127 }, 00:07:34.127 { 00:07:34.127 "dma_device_id": "system", 00:07:34.127 "dma_device_type": 1 00:07:34.127 }, 00:07:34.127 { 00:07:34.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.127 "dma_device_type": 2 00:07:34.127 } 00:07:34.127 ], 00:07:34.127 "driver_specific": { 00:07:34.127 "raid": { 00:07:34.127 "uuid": "f53a777a-e09f-4a49-96e7-b273b1f6a123", 00:07:34.127 "strip_size_kb": 64, 00:07:34.127 "state": "online", 00:07:34.127 "raid_level": "concat", 00:07:34.127 "superblock": true, 00:07:34.127 "num_base_bdevs": 2, 00:07:34.127 "num_base_bdevs_discovered": 2, 00:07:34.127 "num_base_bdevs_operational": 2, 00:07:34.127 "base_bdevs_list": [ 00:07:34.127 { 00:07:34.127 "name": "pt1", 00:07:34.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.127 "is_configured": true, 00:07:34.127 "data_offset": 2048, 00:07:34.127 "data_size": 63488 00:07:34.127 }, 00:07:34.127 { 00:07:34.127 "name": "pt2", 00:07:34.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.127 "is_configured": true, 00:07:34.127 "data_offset": 2048, 00:07:34.127 "data_size": 63488 00:07:34.127 } 00:07:34.127 ] 00:07:34.127 } 00:07:34.127 } 00:07:34.127 }' 00:07:34.127 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.127 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:34.127 pt2' 00:07:34.127 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.387 19:36:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.387 19:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.387 [2024-12-12 19:36:17.112530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f53a777a-e09f-4a49-96e7-b273b1f6a123 '!=' f53a777a-e09f-4a49-96e7-b273b1f6a123 ']' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63902 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63902 ']' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63902 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63902 00:07:34.387 killing process with pid 63902 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63902' 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63902 00:07:34.387 [2024-12-12 19:36:17.179921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.387 [2024-12-12 19:36:17.180019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.387 [2024-12-12 19:36:17.180068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.387 [2024-12-12 19:36:17.180080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:34.387 19:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63902 00:07:34.647 [2024-12-12 19:36:17.376799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.028 19:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:36.028 00:07:36.028 real 0m4.416s 00:07:36.028 user 0m6.211s 00:07:36.028 sys 0m0.755s 00:07:36.028 19:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.028 19:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.028 ************************************ 00:07:36.028 END TEST raid_superblock_test 00:07:36.028 ************************************ 00:07:36.028 19:36:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:36.028 19:36:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:36.028 19:36:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.028 19:36:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.028 ************************************ 00:07:36.028 START TEST raid_read_error_test 00:07:36.028 ************************************ 00:07:36.028 19:36:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:36.028 19:36:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jHPcsHygqG 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64114 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64114 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 64114 ']' 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.028 19:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.028 [2024-12-12 19:36:18.632618] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:36.028 [2024-12-12 19:36:18.632804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64114 ] 00:07:36.028 [2024-12-12 19:36:18.805251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.288 [2024-12-12 19:36:18.920532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.288 [2024-12-12 19:36:19.110116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.288 [2024-12-12 19:36:19.110248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.858 BaseBdev1_malloc 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.858 true 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.858 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.858 [2024-12-12 19:36:19.505059] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:36.858 [2024-12-12 19:36:19.505112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.858 [2024-12-12 19:36:19.505131] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:36.858 [2024-12-12 19:36:19.505141] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.859 [2024-12-12 19:36:19.507167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.859 [2024-12-12 19:36:19.507207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:36.859 BaseBdev1 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.859 BaseBdev2_malloc 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.859 true 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.859 [2024-12-12 19:36:19.572652] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:36.859 [2024-12-12 19:36:19.572704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.859 [2024-12-12 19:36:19.572735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:36.859 [2024-12-12 19:36:19.572744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.859 [2024-12-12 19:36:19.574737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.859 [2024-12-12 19:36:19.574775] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:36.859 BaseBdev2 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.859 [2024-12-12 19:36:19.584693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:36.859 [2024-12-12 19:36:19.586466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:36.859 [2024-12-12 19:36:19.586657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:36.859 [2024-12-12 19:36:19.586673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.859 [2024-12-12 19:36:19.586903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:36.859 [2024-12-12 19:36:19.587075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:36.859 [2024-12-12 19:36:19.587087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:36.859 [2024-12-12 19:36:19.587232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.859 "name": "raid_bdev1", 00:07:36.859 "uuid": "89c3cd46-a15f-4c28-bf95-6ef345f961eb", 00:07:36.859 "strip_size_kb": 64, 00:07:36.859 "state": "online", 00:07:36.859 "raid_level": "concat", 00:07:36.859 "superblock": true, 00:07:36.859 "num_base_bdevs": 2, 00:07:36.859 "num_base_bdevs_discovered": 2, 00:07:36.859 "num_base_bdevs_operational": 2, 00:07:36.859 "base_bdevs_list": [ 00:07:36.859 { 00:07:36.859 "name": "BaseBdev1", 00:07:36.859 "uuid": "ac11dd5c-58da-586c-9bbc-b5978cdbc3af", 00:07:36.859 "is_configured": true, 00:07:36.859 "data_offset": 2048, 00:07:36.859 "data_size": 63488 00:07:36.859 }, 00:07:36.859 { 00:07:36.859 "name": "BaseBdev2", 00:07:36.859 "uuid": "645ff0ad-56c4-541b-81ea-e08ace7393f0", 00:07:36.859 "is_configured": true, 00:07:36.859 "data_offset": 2048, 00:07:36.859 "data_size": 63488 00:07:36.859 } 00:07:36.859 ] 00:07:36.859 }' 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.859 19:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.429 19:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:37.429 19:36:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:37.429 [2024-12-12 19:36:20.160999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.370 "name": "raid_bdev1", 00:07:38.370 "uuid": "89c3cd46-a15f-4c28-bf95-6ef345f961eb", 00:07:38.370 "strip_size_kb": 64, 00:07:38.370 "state": "online", 00:07:38.370 "raid_level": "concat", 00:07:38.370 "superblock": true, 00:07:38.370 "num_base_bdevs": 2, 00:07:38.370 "num_base_bdevs_discovered": 2, 00:07:38.370 "num_base_bdevs_operational": 2, 00:07:38.370 "base_bdevs_list": [ 00:07:38.370 { 00:07:38.370 "name": "BaseBdev1", 00:07:38.370 "uuid": "ac11dd5c-58da-586c-9bbc-b5978cdbc3af", 00:07:38.370 "is_configured": true, 00:07:38.370 "data_offset": 2048, 00:07:38.370 "data_size": 63488 00:07:38.370 }, 00:07:38.370 { 00:07:38.370 "name": "BaseBdev2", 00:07:38.370 "uuid": "645ff0ad-56c4-541b-81ea-e08ace7393f0", 00:07:38.370 "is_configured": true, 00:07:38.370 "data_offset": 2048, 00:07:38.370 "data_size": 63488 00:07:38.370 } 00:07:38.370 ] 00:07:38.370 }' 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.370 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.939 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:38.939 19:36:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.939 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.939 [2024-12-12 19:36:21.515348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.939 [2024-12-12 19:36:21.515386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.939 [2024-12-12 19:36:21.518058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.939 [2024-12-12 19:36:21.518100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.939 [2024-12-12 19:36:21.518130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.939 [2024-12-12 19:36:21.518144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:38.939 { 00:07:38.939 "results": [ 00:07:38.939 { 00:07:38.939 "job": "raid_bdev1", 00:07:38.939 "core_mask": "0x1", 00:07:38.939 "workload": "randrw", 00:07:38.939 "percentage": 50, 00:07:38.939 "status": "finished", 00:07:38.939 "queue_depth": 1, 00:07:38.939 "io_size": 131072, 00:07:38.939 "runtime": 1.355261, 00:07:38.939 "iops": 16144.491725210126, 00:07:38.939 "mibps": 2018.0614656512657, 00:07:38.939 "io_failed": 1, 00:07:38.939 "io_timeout": 0, 00:07:38.939 "avg_latency_us": 85.62443403171861, 00:07:38.940 "min_latency_us": 25.152838427947597, 00:07:38.940 "max_latency_us": 1366.5257641921398 00:07:38.940 } 00:07:38.940 ], 00:07:38.940 "core_count": 1 00:07:38.940 } 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64114 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 64114 ']' 00:07:38.940 19:36:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 64114 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64114 00:07:38.940 killing process with pid 64114 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64114' 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 64114 00:07:38.940 [2024-12-12 19:36:21.557872] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.940 19:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 64114 00:07:38.940 [2024-12-12 19:36:21.690349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jHPcsHygqG 00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:40.321 ************************************ 00:07:40.321 END TEST raid_read_error_test 00:07:40.321 ************************************ 00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
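An editorial note on the bdevperf "results" block above: it is internally consistent. With 128 KiB I/Os, throughput in MiB/s equals IOPS divided by 8, and the 0.74 fail-per-second figure that the `awk '{print $6}'` pipeline extracts is io_failed divided by runtime. A quick check using the numbers copied from the log:

```python
# Values copied from the read-error-test "results" block in this log.
iops = 16144.491725210126
io_size = 131072          # 128 KiB per I/O ("io_size": 131072)
runtime = 1.355261        # seconds
io_failed = 1             # one injected read failure

mibps = iops * io_size / (1024 * 1024)   # == iops / 8 for 128 KiB I/Os
fail_per_s = io_failed / runtime

print(round(mibps, 4))        # 2018.0615
print(round(fail_per_s, 2))   # 0.74
```

The rounded fail rate matches the `fail_per_s=0.74` value the script compares against `0.00` above.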
00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:40.321 00:07:40.321 real 0m4.305s 00:07:40.321 user 0m5.179s 00:07:40.321 sys 0m0.519s 00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.321 19:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.321 19:36:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:40.321 19:36:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:40.321 19:36:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.321 19:36:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.321 ************************************ 00:07:40.321 START TEST raid_write_error_test 00:07:40.321 ************************************ 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.321 19:36:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Kx1mFEMGxN 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64254 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64254 00:07:40.321 19:36:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64254 ']' 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.321 19:36:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.321 [2024-12-12 19:36:23.013374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:40.321 [2024-12-12 19:36:23.013574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64254 ] 00:07:40.581 [2024-12-12 19:36:23.185019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.581 [2024-12-12 19:36:23.299131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.840 [2024-12-12 19:36:23.492681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.840 [2024-12-12 19:36:23.492813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.100 BaseBdev1_malloc 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.100 true 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.100 [2024-12-12 19:36:23.888202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:41.100 [2024-12-12 19:36:23.888256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.100 [2024-12-12 19:36:23.888275] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:41.100 [2024-12-12 19:36:23.888285] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.100 [2024-12-12 19:36:23.890369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.100 BaseBdev1 00:07:41.100 [2024-12-12 19:36:23.890461] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.100 BaseBdev2_malloc 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.100 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.359 true 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.359 [2024-12-12 19:36:23.954846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:41.359 [2024-12-12 19:36:23.954898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.359 [2024-12-12 19:36:23.954932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:41.359 
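An editorial note on the state check repeated throughout this log: `verify_raid_bdev_state` filters `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and compares fields against its expected locals. A minimal Python sketch of the same comparison, seeded with the field values recorded in this log (the real helper is bash plus jq; this is only an illustration):

```python
import json

# raid_bdev_info as recorded in this log, abridged to the checked fields.
raid_bdev_info = json.loads("""{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}""")

# The same conditions the helper's locals encode:
# expected_state=online, raid_level=concat, strip_size=64,
# num_base_bdevs_operational=2.
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "concat"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_operational"] == 2
print("raid_bdev1 state verified")
```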
[2024-12-12 19:36:23.954941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.359 [2024-12-12 19:36:23.956979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.359 [2024-12-12 19:36:23.957017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:41.359 BaseBdev2 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.359 [2024-12-12 19:36:23.966871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.359 [2024-12-12 19:36:23.968562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.359 [2024-12-12 19:36:23.968734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:41.359 [2024-12-12 19:36:23.968748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:41.359 [2024-12-12 19:36:23.968958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:41.359 [2024-12-12 19:36:23.969124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:41.359 [2024-12-12 19:36:23.969136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:41.359 [2024-12-12 19:36:23.969280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.359 
19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.359 19:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.359 19:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.359 "name": "raid_bdev1", 00:07:41.359 "uuid": "a9795018-3107-47b1-8746-7b7116411bc0", 00:07:41.359 "strip_size_kb": 64, 00:07:41.359 "state": "online", 00:07:41.359 "raid_level": "concat", 00:07:41.359 "superblock": true, 
00:07:41.359 "num_base_bdevs": 2, 00:07:41.359 "num_base_bdevs_discovered": 2, 00:07:41.359 "num_base_bdevs_operational": 2, 00:07:41.359 "base_bdevs_list": [ 00:07:41.359 { 00:07:41.359 "name": "BaseBdev1", 00:07:41.359 "uuid": "1b1ba7fe-f944-5211-8614-611fb23c95ab", 00:07:41.359 "is_configured": true, 00:07:41.359 "data_offset": 2048, 00:07:41.359 "data_size": 63488 00:07:41.359 }, 00:07:41.359 { 00:07:41.359 "name": "BaseBdev2", 00:07:41.359 "uuid": "2931177b-28dc-5ede-8ad8-0090900d363c", 00:07:41.359 "is_configured": true, 00:07:41.359 "data_offset": 2048, 00:07:41.359 "data_size": 63488 00:07:41.359 } 00:07:41.359 ] 00:07:41.359 }' 00:07:41.359 19:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.359 19:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.618 19:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:41.618 19:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:41.878 [2024-12-12 19:36:24.479273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.821 "name": "raid_bdev1", 00:07:42.821 "uuid": "a9795018-3107-47b1-8746-7b7116411bc0", 00:07:42.821 "strip_size_kb": 64, 00:07:42.821 "state": "online", 00:07:42.821 "raid_level": "concat", 
00:07:42.821 "superblock": true, 00:07:42.821 "num_base_bdevs": 2, 00:07:42.821 "num_base_bdevs_discovered": 2, 00:07:42.821 "num_base_bdevs_operational": 2, 00:07:42.821 "base_bdevs_list": [ 00:07:42.821 { 00:07:42.821 "name": "BaseBdev1", 00:07:42.821 "uuid": "1b1ba7fe-f944-5211-8614-611fb23c95ab", 00:07:42.821 "is_configured": true, 00:07:42.821 "data_offset": 2048, 00:07:42.821 "data_size": 63488 00:07:42.821 }, 00:07:42.821 { 00:07:42.821 "name": "BaseBdev2", 00:07:42.821 "uuid": "2931177b-28dc-5ede-8ad8-0090900d363c", 00:07:42.821 "is_configured": true, 00:07:42.821 "data_offset": 2048, 00:07:42.821 "data_size": 63488 00:07:42.821 } 00:07:42.821 ] 00:07:42.821 }' 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.821 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 [2024-12-12 19:36:25.818896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.086 [2024-12-12 19:36:25.818991] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.086 [2024-12-12 19:36:25.821605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.086 [2024-12-12 19:36:25.821643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.086 [2024-12-12 19:36:25.821710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.086 [2024-12-12 19:36:25.821728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:43.086 { 
00:07:43.086 "results": [ 00:07:43.086 { 00:07:43.086 "job": "raid_bdev1", 00:07:43.086 "core_mask": "0x1", 00:07:43.086 "workload": "randrw", 00:07:43.086 "percentage": 50, 00:07:43.086 "status": "finished", 00:07:43.086 "queue_depth": 1, 00:07:43.086 "io_size": 131072, 00:07:43.086 "runtime": 1.340508, 00:07:43.086 "iops": 16121.500207384066, 00:07:43.086 "mibps": 2015.1875259230083, 00:07:43.086 "io_failed": 1, 00:07:43.086 "io_timeout": 0, 00:07:43.086 "avg_latency_us": 85.81325044229835, 00:07:43.086 "min_latency_us": 25.2646288209607, 00:07:43.086 "max_latency_us": 1430.9170305676855 00:07:43.086 } 00:07:43.086 ], 00:07:43.086 "core_count": 1 00:07:43.086 } 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64254 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64254 ']' 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64254 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64254 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64254' 00:07:43.086 killing process with pid 64254 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64254 00:07:43.086 [2024-12-12 19:36:25.871762] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.086 19:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64254 00:07:43.353 [2024-12-12 19:36:26.005410] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.292 19:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Kx1mFEMGxN 00:07:44.292 19:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:44.292 19:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:44.551 19:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:44.551 19:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:44.551 19:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.551 19:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:44.551 19:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:44.551 ************************************ 00:07:44.551 END TEST raid_write_error_test 00:07:44.551 ************************************ 00:07:44.551 00:07:44.551 real 0m4.237s 00:07:44.551 user 0m5.036s 00:07:44.551 sys 0m0.549s 00:07:44.551 19:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.551 19:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.552 19:36:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:44.552 19:36:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:44.552 19:36:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:44.552 19:36:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.552 19:36:27 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.552 ************************************ 00:07:44.552 START TEST raid_state_function_test 00:07:44.552 ************************************ 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64392 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64392' 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:44.552 Process raid pid: 64392 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64392 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64392 ']' 00:07:44.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.552 19:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.552 [2024-12-12 19:36:27.320808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:44.552 [2024-12-12 19:36:27.321076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.811 [2024-12-12 19:36:27.501869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.811 [2024-12-12 19:36:27.617021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.071 [2024-12-12 19:36:27.818465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.071 [2024-12-12 19:36:27.818499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.330 [2024-12-12 19:36:28.157237] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.330 [2024-12-12 19:36:28.157382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.330 [2024-12-12 19:36:28.157398] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:07:45.330 [2024-12-12 19:36:28.157408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.330 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.331 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.331 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.331 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.331 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.331 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.331 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.590 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:07:45.590 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.590 "name": "Existed_Raid", 00:07:45.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.590 "strip_size_kb": 0, 00:07:45.590 "state": "configuring", 00:07:45.590 "raid_level": "raid1", 00:07:45.590 "superblock": false, 00:07:45.590 "num_base_bdevs": 2, 00:07:45.590 "num_base_bdevs_discovered": 0, 00:07:45.590 "num_base_bdevs_operational": 2, 00:07:45.590 "base_bdevs_list": [ 00:07:45.590 { 00:07:45.590 "name": "BaseBdev1", 00:07:45.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.590 "is_configured": false, 00:07:45.590 "data_offset": 0, 00:07:45.590 "data_size": 0 00:07:45.590 }, 00:07:45.590 { 00:07:45.590 "name": "BaseBdev2", 00:07:45.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.590 "is_configured": false, 00:07:45.590 "data_offset": 0, 00:07:45.590 "data_size": 0 00:07:45.590 } 00:07:45.590 ] 00:07:45.590 }' 00:07:45.590 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.590 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.849 [2024-12-12 19:36:28.608401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.849 [2024-12-12 19:36:28.608479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.849 [2024-12-12 19:36:28.616373] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.849 [2024-12-12 19:36:28.616450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.849 [2024-12-12 19:36:28.616478] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.849 [2024-12-12 19:36:28.616502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.849 [2024-12-12 19:36:28.658158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.849 BaseBdev1 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:45.849 
19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.849 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.849 [ 00:07:45.849 { 00:07:45.849 "name": "BaseBdev1", 00:07:45.849 "aliases": [ 00:07:45.849 "8485c586-286b-4707-aa99-cf272dd55d9a" 00:07:45.849 ], 00:07:45.849 "product_name": "Malloc disk", 00:07:45.849 "block_size": 512, 00:07:45.849 "num_blocks": 65536, 00:07:45.849 "uuid": "8485c586-286b-4707-aa99-cf272dd55d9a", 00:07:45.849 "assigned_rate_limits": { 00:07:45.849 "rw_ios_per_sec": 0, 00:07:45.849 "rw_mbytes_per_sec": 0, 00:07:45.849 "r_mbytes_per_sec": 0, 00:07:45.849 "w_mbytes_per_sec": 0 00:07:45.849 }, 00:07:45.849 "claimed": true, 00:07:45.849 "claim_type": "exclusive_write", 00:07:45.849 "zoned": false, 00:07:45.849 "supported_io_types": { 00:07:45.849 "read": true, 00:07:45.849 "write": true, 00:07:45.849 "unmap": true, 00:07:45.849 "flush": true, 00:07:45.849 "reset": true, 00:07:45.849 "nvme_admin": false, 00:07:45.849 "nvme_io": false, 00:07:45.849 "nvme_io_md": false, 00:07:45.849 "write_zeroes": true, 00:07:45.849 "zcopy": true, 00:07:45.849 "get_zone_info": 
false, 00:07:45.849 "zone_management": false, 00:07:45.849 "zone_append": false, 00:07:45.849 "compare": false, 00:07:45.849 "compare_and_write": false, 00:07:45.849 "abort": true, 00:07:45.849 "seek_hole": false, 00:07:46.109 "seek_data": false, 00:07:46.109 "copy": true, 00:07:46.109 "nvme_iov_md": false 00:07:46.109 }, 00:07:46.109 "memory_domains": [ 00:07:46.109 { 00:07:46.109 "dma_device_id": "system", 00:07:46.109 "dma_device_type": 1 00:07:46.109 }, 00:07:46.109 { 00:07:46.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.109 "dma_device_type": 2 00:07:46.109 } 00:07:46.109 ], 00:07:46.109 "driver_specific": {} 00:07:46.109 } 00:07:46.109 ] 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.109 "name": "Existed_Raid", 00:07:46.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.109 "strip_size_kb": 0, 00:07:46.109 "state": "configuring", 00:07:46.109 "raid_level": "raid1", 00:07:46.109 "superblock": false, 00:07:46.109 "num_base_bdevs": 2, 00:07:46.109 "num_base_bdevs_discovered": 1, 00:07:46.109 "num_base_bdevs_operational": 2, 00:07:46.109 "base_bdevs_list": [ 00:07:46.109 { 00:07:46.109 "name": "BaseBdev1", 00:07:46.109 "uuid": "8485c586-286b-4707-aa99-cf272dd55d9a", 00:07:46.109 "is_configured": true, 00:07:46.109 "data_offset": 0, 00:07:46.109 "data_size": 65536 00:07:46.109 }, 00:07:46.109 { 00:07:46.109 "name": "BaseBdev2", 00:07:46.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.109 "is_configured": false, 00:07:46.109 "data_offset": 0, 00:07:46.109 "data_size": 0 00:07:46.109 } 00:07:46.109 ] 00:07:46.109 }' 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.109 19:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.369 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.369 19:36:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.369 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.369 [2024-12-12 19:36:29.105459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.369 [2024-12-12 19:36:29.105589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:46.369 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.370 [2024-12-12 19:36:29.117458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.370 [2024-12-12 19:36:29.119295] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.370 [2024-12-12 19:36:29.119339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.370 "name": "Existed_Raid", 00:07:46.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.370 "strip_size_kb": 0, 00:07:46.370 "state": "configuring", 00:07:46.370 "raid_level": "raid1", 00:07:46.370 "superblock": false, 00:07:46.370 "num_base_bdevs": 2, 00:07:46.370 "num_base_bdevs_discovered": 1, 00:07:46.370 "num_base_bdevs_operational": 2, 00:07:46.370 "base_bdevs_list": [ 00:07:46.370 { 00:07:46.370 "name": "BaseBdev1", 00:07:46.370 "uuid": "8485c586-286b-4707-aa99-cf272dd55d9a", 00:07:46.370 
"is_configured": true, 00:07:46.370 "data_offset": 0, 00:07:46.370 "data_size": 65536 00:07:46.370 }, 00:07:46.370 { 00:07:46.370 "name": "BaseBdev2", 00:07:46.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.370 "is_configured": false, 00:07:46.370 "data_offset": 0, 00:07:46.370 "data_size": 0 00:07:46.370 } 00:07:46.370 ] 00:07:46.370 }' 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.370 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.938 [2024-12-12 19:36:29.649999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.938 [2024-12-12 19:36:29.650108] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:46.938 [2024-12-12 19:36:29.650132] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:46.938 [2024-12-12 19:36:29.650444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.938 [2024-12-12 19:36:29.650699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:46.938 [2024-12-12 19:36:29.650749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:46.938 [2024-12-12 19:36:29.651066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.938 BaseBdev2 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.938 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.939 [ 00:07:46.939 { 00:07:46.939 "name": "BaseBdev2", 00:07:46.939 "aliases": [ 00:07:46.939 "846e5a0f-06b7-46a8-ac2a-96f819de4733" 00:07:46.939 ], 00:07:46.939 "product_name": "Malloc disk", 00:07:46.939 "block_size": 512, 00:07:46.939 "num_blocks": 65536, 00:07:46.939 "uuid": "846e5a0f-06b7-46a8-ac2a-96f819de4733", 00:07:46.939 "assigned_rate_limits": { 00:07:46.939 "rw_ios_per_sec": 0, 00:07:46.939 "rw_mbytes_per_sec": 0, 00:07:46.939 "r_mbytes_per_sec": 0, 00:07:46.939 "w_mbytes_per_sec": 0 00:07:46.939 }, 00:07:46.939 "claimed": true, 00:07:46.939 "claim_type": 
"exclusive_write", 00:07:46.939 "zoned": false, 00:07:46.939 "supported_io_types": { 00:07:46.939 "read": true, 00:07:46.939 "write": true, 00:07:46.939 "unmap": true, 00:07:46.939 "flush": true, 00:07:46.939 "reset": true, 00:07:46.939 "nvme_admin": false, 00:07:46.939 "nvme_io": false, 00:07:46.939 "nvme_io_md": false, 00:07:46.939 "write_zeroes": true, 00:07:46.939 "zcopy": true, 00:07:46.939 "get_zone_info": false, 00:07:46.939 "zone_management": false, 00:07:46.939 "zone_append": false, 00:07:46.939 "compare": false, 00:07:46.939 "compare_and_write": false, 00:07:46.939 "abort": true, 00:07:46.939 "seek_hole": false, 00:07:46.939 "seek_data": false, 00:07:46.939 "copy": true, 00:07:46.939 "nvme_iov_md": false 00:07:46.939 }, 00:07:46.939 "memory_domains": [ 00:07:46.939 { 00:07:46.939 "dma_device_id": "system", 00:07:46.939 "dma_device_type": 1 00:07:46.939 }, 00:07:46.939 { 00:07:46.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.939 "dma_device_type": 2 00:07:46.939 } 00:07:46.939 ], 00:07:46.939 "driver_specific": {} 00:07:46.939 } 00:07:46.939 ] 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.939 
19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.939 "name": "Existed_Raid", 00:07:46.939 "uuid": "8ad8fdd2-f6c0-4ed9-bc9c-ebb982a3b784", 00:07:46.939 "strip_size_kb": 0, 00:07:46.939 "state": "online", 00:07:46.939 "raid_level": "raid1", 00:07:46.939 "superblock": false, 00:07:46.939 "num_base_bdevs": 2, 00:07:46.939 "num_base_bdevs_discovered": 2, 00:07:46.939 "num_base_bdevs_operational": 2, 00:07:46.939 "base_bdevs_list": [ 00:07:46.939 { 00:07:46.939 "name": "BaseBdev1", 00:07:46.939 "uuid": "8485c586-286b-4707-aa99-cf272dd55d9a", 00:07:46.939 "is_configured": true, 00:07:46.939 "data_offset": 0, 00:07:46.939 "data_size": 65536 00:07:46.939 }, 00:07:46.939 { 00:07:46.939 "name": "BaseBdev2", 
00:07:46.939 "uuid": "846e5a0f-06b7-46a8-ac2a-96f819de4733",
00:07:46.939 "is_configured": true,
00:07:46.939 "data_offset": 0,
00:07:46.939 "data_size": 65536
00:07:46.939 }
00:07:46.939 ]
00:07:46.939 }'
00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:46.939 19:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.508 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:47.508 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:47.508 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:47.508 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:47.508 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:47.508 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:47.509 [2024-12-12 19:36:30.157617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:47.509 "name": "Existed_Raid",
00:07:47.509 "aliases": [
00:07:47.509 "8ad8fdd2-f6c0-4ed9-bc9c-ebb982a3b784"
00:07:47.509 ],
00:07:47.509 "product_name": "Raid Volume",
00:07:47.509 "block_size": 512,
00:07:47.509 "num_blocks": 65536,
00:07:47.509 "uuid": "8ad8fdd2-f6c0-4ed9-bc9c-ebb982a3b784",
00:07:47.509 "assigned_rate_limits": {
00:07:47.509 "rw_ios_per_sec": 0,
00:07:47.509 "rw_mbytes_per_sec": 0,
00:07:47.509 "r_mbytes_per_sec": 0,
00:07:47.509 "w_mbytes_per_sec": 0
00:07:47.509 },
00:07:47.509 "claimed": false,
00:07:47.509 "zoned": false,
00:07:47.509 "supported_io_types": {
00:07:47.509 "read": true,
00:07:47.509 "write": true,
00:07:47.509 "unmap": false,
00:07:47.509 "flush": false,
00:07:47.509 "reset": true,
00:07:47.509 "nvme_admin": false,
00:07:47.509 "nvme_io": false,
00:07:47.509 "nvme_io_md": false,
00:07:47.509 "write_zeroes": true,
00:07:47.509 "zcopy": false,
00:07:47.509 "get_zone_info": false,
00:07:47.509 "zone_management": false,
00:07:47.509 "zone_append": false,
00:07:47.509 "compare": false,
00:07:47.509 "compare_and_write": false,
00:07:47.509 "abort": false,
00:07:47.509 "seek_hole": false,
00:07:47.509 "seek_data": false,
00:07:47.509 "copy": false,
00:07:47.509 "nvme_iov_md": false
00:07:47.509 },
00:07:47.509 "memory_domains": [
00:07:47.509 {
00:07:47.509 "dma_device_id": "system",
00:07:47.509 "dma_device_type": 1
00:07:47.509 },
00:07:47.509 {
00:07:47.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:47.509 "dma_device_type": 2
00:07:47.509 },
00:07:47.509 {
00:07:47.509 "dma_device_id": "system",
00:07:47.509 "dma_device_type": 1
00:07:47.509 },
00:07:47.509 {
00:07:47.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:47.509 "dma_device_type": 2
00:07:47.509 }
00:07:47.509 ],
00:07:47.509 "driver_specific": {
00:07:47.509 "raid": {
00:07:47.509 "uuid": "8ad8fdd2-f6c0-4ed9-bc9c-ebb982a3b784",
00:07:47.509 "strip_size_kb": 0,
00:07:47.509 "state": "online",
00:07:47.509 "raid_level": "raid1",
00:07:47.509 "superblock": false,
00:07:47.509 "num_base_bdevs": 2,
00:07:47.509 "num_base_bdevs_discovered": 2,
00:07:47.509 "num_base_bdevs_operational": 2,
00:07:47.509 "base_bdevs_list": [
00:07:47.509 {
00:07:47.509 "name": "BaseBdev1",
00:07:47.509 "uuid": "8485c586-286b-4707-aa99-cf272dd55d9a",
00:07:47.509 "is_configured": true,
00:07:47.509 "data_offset": 0,
00:07:47.509 "data_size": 65536
00:07:47.509 },
00:07:47.509 {
00:07:47.509 "name": "BaseBdev2",
00:07:47.509 "uuid": "846e5a0f-06b7-46a8-ac2a-96f819de4733",
00:07:47.509 "is_configured": true,
00:07:47.509 "data_offset": 0,
00:07:47.509 "data_size": 65536
00:07:47.509 }
00:07:47.509 ]
00:07:47.509 }
00:07:47.509 }
00:07:47.509 }'
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:47.509 BaseBdev2'
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.509 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.769 [2024-12-12 19:36:30.369006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:47.769 "name": "Existed_Raid",
00:07:47.769 "uuid": "8ad8fdd2-f6c0-4ed9-bc9c-ebb982a3b784",
00:07:47.769 "strip_size_kb": 0,
00:07:47.769 "state": "online",
00:07:47.769 "raid_level": "raid1",
00:07:47.769 "superblock": false,
00:07:47.769 "num_base_bdevs": 2,
00:07:47.769 "num_base_bdevs_discovered": 1,
00:07:47.769 "num_base_bdevs_operational": 1,
00:07:47.769 "base_bdevs_list": [
00:07:47.769 {
00:07:47.769 "name": null,
00:07:47.769 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:47.769 "is_configured": false,
00:07:47.769 "data_offset": 0,
00:07:47.769 "data_size": 65536
00:07:47.769 },
00:07:47.769 {
00:07:47.769 "name": "BaseBdev2",
00:07:47.769 "uuid": "846e5a0f-06b7-46a8-ac2a-96f819de4733",
00:07:47.769 "is_configured": true,
00:07:47.769 "data_offset": 0,
00:07:47.769 "data_size": 65536
00:07:47.769 }
00:07:47.769 ]
00:07:47.769 }'
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:47.769 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.028 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:48.028 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:48.028 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:48.028 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.028 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.028 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:48.028 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.288 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:48.288 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:48.288 19:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:48.288 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.288 19:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.288 [2024-12-12 19:36:30.909413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:48.288 [2024-12-12 19:36:30.909586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:48.288 [2024-12-12 19:36:30.999787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:48.288 [2024-12-12 19:36:30.999912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:48.288 [2024-12-12 19:36:30.999929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64392
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64392 ']'
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64392
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64392
00:07:48.288 killing process with pid 64392 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64392'
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64392
00:07:48.288 [2024-12-12 19:36:31.097939] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:48.288 19:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64392
00:07:48.288 [2024-12-12 19:36:31.113943] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:07:49.669
00:07:49.669 real 0m4.979s
00:07:49.669 user 0m7.185s
00:07:49.669 sys 0m0.843s
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.669 ************************************
00:07:49.669 END TEST raid_state_function_test
00:07:49.669 ************************************
00:07:49.669 19:36:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true
00:07:49.669 19:36:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:49.669 19:36:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.669 19:36:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:49.669 ************************************
00:07:49.669 START TEST raid_state_function_test_sb
00:07:49.669 ************************************
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64645
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64645'
00:07:49.669 Process raid pid: 64645
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64645
00:07:49.669 19:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64645 ']'
00:07:49.670 19:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:49.670 19:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:49.670 19:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:49.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:49.670 19:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:49.670 19:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:49.670 [2024-12-12 19:36:32.360085] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:07:49.670 [2024-12-12 19:36:32.360286] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:49.929 [2024-12-12 19:36:32.536965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:49.929 [2024-12-12 19:36:32.653198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.188 [2024-12-12 19:36:32.850216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:50.188 [2024-12-12 19:36:32.850303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:50.448 [2024-12-12 19:36:33.213458] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:50.448 [2024-12-12 19:36:33.213570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:50.448 [2024-12-12 19:36:33.213603] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:50.448 [2024-12-12 19:36:33.213625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:50.448 "name": "Existed_Raid",
00:07:50.448 "uuid": "e47e188e-10d2-46f2-ac85-41d536ae7d49",
00:07:50.448 "strip_size_kb": 0,
00:07:50.448 "state": "configuring",
00:07:50.448 "raid_level": "raid1",
00:07:50.448 "superblock": true,
00:07:50.448 "num_base_bdevs": 2,
00:07:50.448 "num_base_bdevs_discovered": 0,
00:07:50.448 "num_base_bdevs_operational": 2,
00:07:50.448 "base_bdevs_list": [
00:07:50.448 {
00:07:50.448 "name": "BaseBdev1",
00:07:50.448 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:50.448 "is_configured": false,
00:07:50.448 "data_offset": 0,
00:07:50.448 "data_size": 0
00:07:50.448 },
00:07:50.448 {
00:07:50.448 "name": "BaseBdev2",
00:07:50.448 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:50.448 "is_configured": false,
00:07:50.448 "data_offset": 0,
00:07:50.448 "data_size": 0
00:07:50.448 }
00:07:50.448 ]
00:07:50.448 }'
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:50.448 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.016 [2024-12-12 19:36:33.640669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:51.016 [2024-12-12 19:36:33.640703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.016 [2024-12-12 19:36:33.652650] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:51.016 [2024-12-12 19:36:33.652722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:51.016 [2024-12-12 19:36:33.652748] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:51.016 [2024-12-12 19:36:33.652773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.016 [2024-12-12 19:36:33.701771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:51.016 BaseBdev1
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.016 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.017 [
00:07:51.017 {
00:07:51.017 "name": "BaseBdev1",
00:07:51.017 "aliases": [
00:07:51.017 "89f167f1-2ded-46d7-a951-b3048d4a2904"
00:07:51.017 ],
00:07:51.017 "product_name": "Malloc disk",
00:07:51.017 "block_size": 512,
00:07:51.017 "num_blocks": 65536,
00:07:51.017 "uuid": "89f167f1-2ded-46d7-a951-b3048d4a2904",
00:07:51.017 "assigned_rate_limits": {
00:07:51.017 "rw_ios_per_sec": 0,
00:07:51.017 "rw_mbytes_per_sec": 0,
00:07:51.017 "r_mbytes_per_sec": 0,
00:07:51.017 "w_mbytes_per_sec": 0
00:07:51.017 },
00:07:51.017 "claimed": true,
00:07:51.017 "claim_type": "exclusive_write",
00:07:51.017 "zoned": false,
00:07:51.017 "supported_io_types": {
00:07:51.017 "read": true,
00:07:51.017 "write": true,
00:07:51.017 "unmap": true,
00:07:51.017 "flush": true,
00:07:51.017 "reset": true,
00:07:51.017 "nvme_admin": false,
00:07:51.017 "nvme_io": false,
00:07:51.017 "nvme_io_md": false,
00:07:51.017 "write_zeroes": true,
00:07:51.017 "zcopy": true,
00:07:51.017 "get_zone_info": false,
00:07:51.017 "zone_management": false,
00:07:51.017 "zone_append": false,
00:07:51.017 "compare": false,
00:07:51.017 "compare_and_write": false,
00:07:51.017 "abort": true,
00:07:51.017 "seek_hole": false,
00:07:51.017 "seek_data": false,
00:07:51.017 "copy": true,
00:07:51.017 "nvme_iov_md": false
00:07:51.017 },
00:07:51.017 "memory_domains": [
00:07:51.017 {
00:07:51.017 "dma_device_id": "system",
00:07:51.017 "dma_device_type": 1
00:07:51.017 },
00:07:51.017 {
00:07:51.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:51.017 "dma_device_type": 2
00:07:51.017 }
00:07:51.017 ],
00:07:51.017 "driver_specific": {}
00:07:51.017 }
00:07:51.017 ]
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:51.017 "name": "Existed_Raid",
00:07:51.017 "uuid": "0190c45a-1f54-4cba-890a-4fcb07518de9",
00:07:51.017 "strip_size_kb": 0,
00:07:51.017 "state": "configuring",
00:07:51.017 "raid_level": "raid1",
00:07:51.017 "superblock": true,
00:07:51.017 "num_base_bdevs": 2,
00:07:51.017 "num_base_bdevs_discovered": 1,
00:07:51.017 "num_base_bdevs_operational": 2,
00:07:51.017 "base_bdevs_list": [
00:07:51.017 {
00:07:51.017 "name": "BaseBdev1",
00:07:51.017 "uuid": "89f167f1-2ded-46d7-a951-b3048d4a2904",
00:07:51.017 "is_configured": true,
00:07:51.017 "data_offset": 2048,
00:07:51.017 "data_size": 63488
00:07:51.017 },
00:07:51.017 {
00:07:51.017 "name": "BaseBdev2",
00:07:51.017 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:51.017 "is_configured": false,
00:07:51.017 "data_offset": 0,
00:07:51.017 "data_size": 0
00:07:51.017 }
00:07:51.017 ]
00:07:51.017 }'
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:51.017 19:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.584 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:51.584 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.584 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.584 [2024-12-12 19:36:34.141069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:51.584 [2024-12-12 19:36:34.141124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:51.584 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.584 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:51.584 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.584 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.584 [2024-12-12 19:36:34.153081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed [2024-12-12 19:36:34.154920] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:51.584 [2024-12-12 19:36:34.155008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:51.585 "name": "Existed_Raid",
00:07:51.585 "uuid": "629c9d18-9bcf-4ebf-8c04-ffe702f9d7c6",
00:07:51.585 "strip_size_kb": 0,
00:07:51.585 "state": "configuring",
00:07:51.585 "raid_level": "raid1",
00:07:51.585 "superblock": true,
00:07:51.585 "num_base_bdevs": 2,
00:07:51.585 "num_base_bdevs_discovered": 1,
00:07:51.585 "num_base_bdevs_operational": 2,
00:07:51.585 "base_bdevs_list": [
00:07:51.585 {
00:07:51.585 "name": "BaseBdev1",
00:07:51.585 "uuid": "89f167f1-2ded-46d7-a951-b3048d4a2904",
00:07:51.585 "is_configured": true,
00:07:51.585 "data_offset": 2048,
00:07:51.585 "data_size": 63488
00:07:51.585 },
00:07:51.585 {
00:07:51.585 "name": "BaseBdev2",
00:07:51.585 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:51.585 "is_configured": false,
00:07:51.585 "data_offset": 0,
00:07:51.585 "data_size": 0
00:07:51.585 }
00:07:51.585 ]
00:07:51.585 }'
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:51.585 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.844 [2024-12-12 19:36:34.656893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed [2024-12-12 19:36:34.657180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:51.844 [2024-12-12 19:36:34.657196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:07:51.844 [2024-12-12 19:36:34.657462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:51.844 [2024-12-12 19:36:34.657655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:51.844 [2024-12-12 19:36:34.657671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 BaseBdev2 [2024-12-12 19:36:34.657808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- #
[[ 0 == 0 ]] 00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.844 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.844 [ 00:07:51.844 { 00:07:51.844 "name": "BaseBdev2", 00:07:51.844 "aliases": [ 00:07:51.845 "0d411a87-04a6-4e22-9cba-622523180abb" 00:07:51.845 ], 00:07:51.845 "product_name": "Malloc disk", 00:07:51.845 "block_size": 512, 00:07:51.845 "num_blocks": 65536, 00:07:51.845 "uuid": "0d411a87-04a6-4e22-9cba-622523180abb", 00:07:51.845 "assigned_rate_limits": { 00:07:51.845 "rw_ios_per_sec": 0, 00:07:51.845 "rw_mbytes_per_sec": 0, 00:07:51.845 "r_mbytes_per_sec": 0, 00:07:51.845 "w_mbytes_per_sec": 0 00:07:51.845 }, 00:07:51.845 "claimed": true, 00:07:51.845 "claim_type": "exclusive_write", 00:07:51.845 "zoned": false, 00:07:51.845 "supported_io_types": { 00:07:51.845 "read": true, 00:07:51.845 "write": true, 00:07:51.845 "unmap": true, 00:07:51.845 "flush": true, 00:07:51.845 "reset": true, 00:07:51.845 "nvme_admin": false, 00:07:51.845 "nvme_io": false, 00:07:51.845 "nvme_io_md": false, 00:07:51.845 "write_zeroes": true, 00:07:52.105 "zcopy": true, 00:07:52.105 "get_zone_info": false, 00:07:52.105 "zone_management": false, 00:07:52.105 "zone_append": false, 00:07:52.105 "compare": false, 00:07:52.105 "compare_and_write": false, 00:07:52.105 "abort": true, 00:07:52.105 "seek_hole": false, 00:07:52.105 "seek_data": false, 00:07:52.105 "copy": true, 00:07:52.105 "nvme_iov_md": false 00:07:52.105 }, 00:07:52.105 "memory_domains": [ 00:07:52.105 { 00:07:52.105 "dma_device_id": "system", 00:07:52.105 "dma_device_type": 1 00:07:52.105 }, 00:07:52.105 { 00:07:52.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.105 "dma_device_type": 2 00:07:52.105 } 00:07:52.105 ], 00:07:52.105 "driver_specific": 
{} 00:07:52.105 } 00:07:52.105 ] 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.105 "name": "Existed_Raid", 00:07:52.105 "uuid": "629c9d18-9bcf-4ebf-8c04-ffe702f9d7c6", 00:07:52.105 "strip_size_kb": 0, 00:07:52.105 "state": "online", 00:07:52.105 "raid_level": "raid1", 00:07:52.105 "superblock": true, 00:07:52.105 "num_base_bdevs": 2, 00:07:52.105 "num_base_bdevs_discovered": 2, 00:07:52.105 "num_base_bdevs_operational": 2, 00:07:52.105 "base_bdevs_list": [ 00:07:52.105 { 00:07:52.105 "name": "BaseBdev1", 00:07:52.105 "uuid": "89f167f1-2ded-46d7-a951-b3048d4a2904", 00:07:52.105 "is_configured": true, 00:07:52.105 "data_offset": 2048, 00:07:52.105 "data_size": 63488 00:07:52.105 }, 00:07:52.105 { 00:07:52.105 "name": "BaseBdev2", 00:07:52.105 "uuid": "0d411a87-04a6-4e22-9cba-622523180abb", 00:07:52.105 "is_configured": true, 00:07:52.105 "data_offset": 2048, 00:07:52.105 "data_size": 63488 00:07:52.105 } 00:07:52.105 ] 00:07:52.105 }' 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.105 19:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.367 [2024-12-12 19:36:35.140451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.367 "name": "Existed_Raid", 00:07:52.367 "aliases": [ 00:07:52.367 "629c9d18-9bcf-4ebf-8c04-ffe702f9d7c6" 00:07:52.367 ], 00:07:52.367 "product_name": "Raid Volume", 00:07:52.367 "block_size": 512, 00:07:52.367 "num_blocks": 63488, 00:07:52.367 "uuid": "629c9d18-9bcf-4ebf-8c04-ffe702f9d7c6", 00:07:52.367 "assigned_rate_limits": { 00:07:52.367 "rw_ios_per_sec": 0, 00:07:52.367 "rw_mbytes_per_sec": 0, 00:07:52.367 "r_mbytes_per_sec": 0, 00:07:52.367 "w_mbytes_per_sec": 0 00:07:52.367 }, 00:07:52.367 "claimed": false, 00:07:52.367 "zoned": false, 00:07:52.367 "supported_io_types": { 00:07:52.367 "read": true, 00:07:52.367 "write": true, 00:07:52.367 "unmap": false, 00:07:52.367 "flush": false, 00:07:52.367 "reset": true, 00:07:52.367 "nvme_admin": false, 00:07:52.367 "nvme_io": false, 00:07:52.367 "nvme_io_md": false, 00:07:52.367 "write_zeroes": true, 00:07:52.367 "zcopy": false, 00:07:52.367 "get_zone_info": false, 00:07:52.367 "zone_management": false, 00:07:52.367 "zone_append": false, 00:07:52.367 "compare": false, 00:07:52.367 "compare_and_write": false, 
00:07:52.367 "abort": false, 00:07:52.367 "seek_hole": false, 00:07:52.367 "seek_data": false, 00:07:52.367 "copy": false, 00:07:52.367 "nvme_iov_md": false 00:07:52.367 }, 00:07:52.367 "memory_domains": [ 00:07:52.367 { 00:07:52.367 "dma_device_id": "system", 00:07:52.367 "dma_device_type": 1 00:07:52.367 }, 00:07:52.367 { 00:07:52.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.367 "dma_device_type": 2 00:07:52.367 }, 00:07:52.367 { 00:07:52.367 "dma_device_id": "system", 00:07:52.367 "dma_device_type": 1 00:07:52.367 }, 00:07:52.367 { 00:07:52.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.367 "dma_device_type": 2 00:07:52.367 } 00:07:52.367 ], 00:07:52.367 "driver_specific": { 00:07:52.367 "raid": { 00:07:52.367 "uuid": "629c9d18-9bcf-4ebf-8c04-ffe702f9d7c6", 00:07:52.367 "strip_size_kb": 0, 00:07:52.367 "state": "online", 00:07:52.367 "raid_level": "raid1", 00:07:52.367 "superblock": true, 00:07:52.367 "num_base_bdevs": 2, 00:07:52.367 "num_base_bdevs_discovered": 2, 00:07:52.367 "num_base_bdevs_operational": 2, 00:07:52.367 "base_bdevs_list": [ 00:07:52.367 { 00:07:52.367 "name": "BaseBdev1", 00:07:52.367 "uuid": "89f167f1-2ded-46d7-a951-b3048d4a2904", 00:07:52.367 "is_configured": true, 00:07:52.367 "data_offset": 2048, 00:07:52.367 "data_size": 63488 00:07:52.367 }, 00:07:52.367 { 00:07:52.367 "name": "BaseBdev2", 00:07:52.367 "uuid": "0d411a87-04a6-4e22-9cba-622523180abb", 00:07:52.367 "is_configured": true, 00:07:52.367 "data_offset": 2048, 00:07:52.367 "data_size": 63488 00:07:52.367 } 00:07:52.367 ] 00:07:52.367 } 00:07:52.367 } 00:07:52.367 }' 00:07:52.367 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.627 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.628 BaseBdev2' 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.628 [2024-12-12 19:36:35.339853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.628 19:36:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.628 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.887 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.887 "name": "Existed_Raid", 00:07:52.887 "uuid": "629c9d18-9bcf-4ebf-8c04-ffe702f9d7c6", 00:07:52.887 "strip_size_kb": 0, 00:07:52.887 "state": "online", 00:07:52.887 "raid_level": "raid1", 00:07:52.887 "superblock": true, 00:07:52.887 "num_base_bdevs": 2, 00:07:52.887 "num_base_bdevs_discovered": 1, 00:07:52.887 "num_base_bdevs_operational": 1, 00:07:52.887 "base_bdevs_list": [ 00:07:52.887 { 00:07:52.887 "name": null, 00:07:52.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.887 "is_configured": false, 00:07:52.887 "data_offset": 0, 00:07:52.887 "data_size": 63488 00:07:52.887 }, 00:07:52.887 { 00:07:52.887 "name": "BaseBdev2", 00:07:52.887 "uuid": "0d411a87-04a6-4e22-9cba-622523180abb", 00:07:52.887 "is_configured": true, 00:07:52.887 "data_offset": 2048, 00:07:52.887 "data_size": 63488 00:07:52.887 } 00:07:52.887 ] 00:07:52.887 }' 00:07:52.887 
19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.887 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.146 19:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.146 [2024-12-12 19:36:35.942889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.146 [2024-12-12 19:36:35.943058] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.406 [2024-12-12 19:36:36.038193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.406 [2024-12-12 19:36:36.038242] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.406 [2024-12-12 19:36:36.038256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64645 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64645 ']' 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64645 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64645 00:07:53.406 killing process with pid 64645 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64645' 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64645 00:07:53.406 [2024-12-12 19:36:36.135472] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.406 19:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64645 00:07:53.406 [2024-12-12 19:36:36.152655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.786 ************************************ 00:07:54.786 END TEST raid_state_function_test_sb 00:07:54.786 ************************************ 00:07:54.786 19:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:54.786 00:07:54.786 real 0m5.051s 00:07:54.786 user 0m7.230s 00:07:54.786 sys 0m0.845s 00:07:54.786 19:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.786 19:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.786 19:36:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:54.786 19:36:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:54.786 19:36:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.786 19:36:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.786 
************************************ 00:07:54.786 START TEST raid_superblock_test 00:07:54.786 ************************************ 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64897 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 64897 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64897 ']' 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.786 19:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.786 [2024-12-12 19:36:37.481557] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:54.786 [2024-12-12 19:36:37.481781] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64897 ] 00:07:55.045 [2024-12-12 19:36:37.662585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.045 [2024-12-12 19:36:37.792391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.303 [2024-12-12 19:36:38.035625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.303 [2024-12-12 19:36:38.035771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:55.562 
19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.562 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.821 malloc1 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.821 [2024-12-12 19:36:38.426430] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:55.821 [2024-12-12 19:36:38.426551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.821 [2024-12-12 19:36:38.426597] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:55.821 [2024-12-12 19:36:38.426637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.821 [2024-12-12 19:36:38.429014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.821 [2024-12-12 19:36:38.429089] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.821 pt1 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.821 malloc2 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.821 [2024-12-12 19:36:38.485639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.821 [2024-12-12 19:36:38.485740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.821 [2024-12-12 19:36:38.485786] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:55.821 [2024-12-12 19:36:38.485829] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.821 [2024-12-12 19:36:38.488140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.821 [2024-12-12 19:36:38.488217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.821 
pt2 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.821 [2024-12-12 19:36:38.497668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.821 [2024-12-12 19:36:38.499696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.821 [2024-12-12 19:36:38.499930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:55.821 [2024-12-12 19:36:38.499986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:55.821 [2024-12-12 19:36:38.500304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:55.821 [2024-12-12 19:36:38.500522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:55.821 [2024-12-12 19:36:38.500597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:55.821 [2024-12-12 19:36:38.500834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.821 "name": "raid_bdev1", 00:07:55.821 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:55.821 "strip_size_kb": 0, 00:07:55.821 "state": "online", 00:07:55.821 "raid_level": "raid1", 00:07:55.821 "superblock": true, 00:07:55.821 "num_base_bdevs": 2, 00:07:55.821 "num_base_bdevs_discovered": 2, 00:07:55.821 "num_base_bdevs_operational": 2, 00:07:55.821 "base_bdevs_list": [ 00:07:55.821 { 00:07:55.821 "name": "pt1", 00:07:55.821 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:55.821 "is_configured": true, 00:07:55.821 "data_offset": 2048, 00:07:55.821 "data_size": 63488 00:07:55.821 }, 00:07:55.821 { 00:07:55.821 "name": "pt2", 00:07:55.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.821 "is_configured": true, 00:07:55.821 "data_offset": 2048, 00:07:55.821 "data_size": 63488 00:07:55.821 } 00:07:55.821 ] 00:07:55.821 }' 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.821 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.388 [2024-12-12 19:36:38.961298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.388 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:56.388 "name": "raid_bdev1", 00:07:56.388 "aliases": [ 00:07:56.388 "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f" 00:07:56.388 ], 00:07:56.388 "product_name": "Raid Volume", 00:07:56.388 "block_size": 512, 00:07:56.388 "num_blocks": 63488, 00:07:56.388 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:56.388 "assigned_rate_limits": { 00:07:56.388 "rw_ios_per_sec": 0, 00:07:56.388 "rw_mbytes_per_sec": 0, 00:07:56.388 "r_mbytes_per_sec": 0, 00:07:56.388 "w_mbytes_per_sec": 0 00:07:56.388 }, 00:07:56.388 "claimed": false, 00:07:56.388 "zoned": false, 00:07:56.388 "supported_io_types": { 00:07:56.388 "read": true, 00:07:56.388 "write": true, 00:07:56.388 "unmap": false, 00:07:56.388 "flush": false, 00:07:56.388 "reset": true, 00:07:56.388 "nvme_admin": false, 00:07:56.388 "nvme_io": false, 00:07:56.388 "nvme_io_md": false, 00:07:56.388 "write_zeroes": true, 00:07:56.388 "zcopy": false, 00:07:56.388 "get_zone_info": false, 00:07:56.388 "zone_management": false, 00:07:56.388 "zone_append": false, 00:07:56.389 "compare": false, 00:07:56.389 "compare_and_write": false, 00:07:56.389 "abort": false, 00:07:56.389 "seek_hole": false, 00:07:56.389 "seek_data": false, 00:07:56.389 "copy": false, 00:07:56.389 "nvme_iov_md": false 00:07:56.389 }, 00:07:56.389 "memory_domains": [ 00:07:56.389 { 00:07:56.389 "dma_device_id": "system", 00:07:56.389 "dma_device_type": 1 00:07:56.389 }, 00:07:56.389 { 00:07:56.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.389 "dma_device_type": 2 00:07:56.389 }, 00:07:56.389 { 00:07:56.389 "dma_device_id": "system", 00:07:56.389 "dma_device_type": 1 00:07:56.389 }, 00:07:56.389 { 00:07:56.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.389 "dma_device_type": 2 00:07:56.389 } 00:07:56.389 ], 00:07:56.389 "driver_specific": { 00:07:56.389 "raid": { 00:07:56.389 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:56.389 "strip_size_kb": 0, 00:07:56.389 "state": "online", 00:07:56.389 "raid_level": "raid1", 
00:07:56.389 "superblock": true, 00:07:56.389 "num_base_bdevs": 2, 00:07:56.389 "num_base_bdevs_discovered": 2, 00:07:56.389 "num_base_bdevs_operational": 2, 00:07:56.389 "base_bdevs_list": [ 00:07:56.389 { 00:07:56.389 "name": "pt1", 00:07:56.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.389 "is_configured": true, 00:07:56.389 "data_offset": 2048, 00:07:56.389 "data_size": 63488 00:07:56.389 }, 00:07:56.389 { 00:07:56.389 "name": "pt2", 00:07:56.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.389 "is_configured": true, 00:07:56.389 "data_offset": 2048, 00:07:56.389 "data_size": 63488 00:07:56.389 } 00:07:56.389 ] 00:07:56.389 } 00:07:56.389 } 00:07:56.389 }' 00:07:56.389 19:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.389 pt2' 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.389 [2024-12-12 19:36:39.196935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c6f3fb14-9d3a-4022-8db3-c7beb5ef619f 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c6f3fb14-9d3a-4022-8db3-c7beb5ef619f ']' 00:07:56.389 19:36:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.389 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.389 [2024-12-12 19:36:39.228599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.389 [2024-12-12 19:36:39.228632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.389 [2024-12-12 19:36:39.228742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.389 [2024-12-12 19:36:39.228819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.389 [2024-12-12 19:36:39.228837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:56.648 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:56.649 19:36:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.649 [2024-12-12 19:36:39.364370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:56.649 [2024-12-12 19:36:39.366666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:56.649 [2024-12-12 19:36:39.366790] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:56.649 [2024-12-12 19:36:39.366886] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:56.649 [2024-12-12 19:36:39.366906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.649 [2024-12-12 19:36:39.366931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:56.649 request: 00:07:56.649 { 00:07:56.649 "name": "raid_bdev1", 00:07:56.649 "raid_level": "raid1", 00:07:56.649 "base_bdevs": [ 00:07:56.649 "malloc1", 00:07:56.649 "malloc2" 00:07:56.649 ], 00:07:56.649 "superblock": false, 00:07:56.649 "method": "bdev_raid_create", 00:07:56.649 "req_id": 1 00:07:56.649 } 00:07:56.649 Got 
JSON-RPC error response 00:07:56.649 response: 00:07:56.649 { 00:07:56.649 "code": -17, 00:07:56.649 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:56.649 } 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.649 [2024-12-12 19:36:39.428220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.649 [2024-12-12 19:36:39.428331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:56.649 [2024-12-12 19:36:39.428372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:56.649 [2024-12-12 19:36:39.428410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.649 [2024-12-12 19:36:39.430972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.649 [2024-12-12 19:36:39.431056] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:56.649 [2024-12-12 19:36:39.431190] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:56.649 [2024-12-12 19:36:39.431296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:56.649 pt1 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.649 
19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.649 "name": "raid_bdev1", 00:07:56.649 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:56.649 "strip_size_kb": 0, 00:07:56.649 "state": "configuring", 00:07:56.649 "raid_level": "raid1", 00:07:56.649 "superblock": true, 00:07:56.649 "num_base_bdevs": 2, 00:07:56.649 "num_base_bdevs_discovered": 1, 00:07:56.649 "num_base_bdevs_operational": 2, 00:07:56.649 "base_bdevs_list": [ 00:07:56.649 { 00:07:56.649 "name": "pt1", 00:07:56.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.649 "is_configured": true, 00:07:56.649 "data_offset": 2048, 00:07:56.649 "data_size": 63488 00:07:56.649 }, 00:07:56.649 { 00:07:56.649 "name": null, 00:07:56.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.649 "is_configured": false, 00:07:56.649 "data_offset": 2048, 00:07:56.649 "data_size": 63488 00:07:56.649 } 00:07:56.649 ] 00:07:56.649 }' 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.649 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.217 [2024-12-12 19:36:39.887515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.217 [2024-12-12 19:36:39.887610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.217 [2024-12-12 19:36:39.887637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:57.217 [2024-12-12 19:36:39.887650] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.217 [2024-12-12 19:36:39.888192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.217 [2024-12-12 19:36:39.888225] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:57.217 [2024-12-12 19:36:39.888317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:57.217 [2024-12-12 19:36:39.888348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.217 [2024-12-12 19:36:39.888500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:57.217 [2024-12-12 19:36:39.888513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:57.217 [2024-12-12 19:36:39.888805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:57.217 [2024-12-12 19:36:39.888993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:57.217 [2024-12-12 19:36:39.889003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:07:57.217 [2024-12-12 19:36:39.889179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.217 pt2 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
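At this point the test has reassembled raid_bdev1 from pt1 and pt2 and calls `verify_raid_bdev_state raid_bdev1 online raid1 0 2`, which fetches the raid bdev via `rpc_cmd bdev_raid_get_bdevs all`, selects it with jq, and compares the state fields. A minimal Python sketch of that comparison (an illustrative re-implementation of the shell helper's checks, not SPDK code; the field names and values are copied from the JSON the RPC dumps in this log):

```python
import json

# Shape of the object returned by `bdev_raid_get_bdevs all` for raid_bdev1,
# trimmed to the fields the verify helper actually compares.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the shell helper's assertions: expected state, RAID level,
    # strip size, and that every operational base bdev was discovered.
    assert info["state"] == expected_state, info["state"]
    assert info["raid_level"] == raid_level, info["raid_level"]
    assert info["strip_size_kb"] == strip_size, info["strip_size_kb"]
    assert info["num_base_bdevs_operational"] == operational
    assert info["num_base_bdevs_discovered"] == operational

verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
```

The same helper is what failed earlier in the flow if the bdev stayed in `configuring` (only one base bdev discovered), which is why the test passes `configuring ... 2` there and `online ... 2` here.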
00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.217 "name": "raid_bdev1", 00:07:57.217 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:57.217 "strip_size_kb": 0, 00:07:57.217 "state": "online", 00:07:57.217 "raid_level": "raid1", 00:07:57.217 "superblock": true, 00:07:57.217 "num_base_bdevs": 2, 00:07:57.217 "num_base_bdevs_discovered": 2, 00:07:57.217 "num_base_bdevs_operational": 2, 00:07:57.217 "base_bdevs_list": [ 00:07:57.217 { 00:07:57.217 "name": "pt1", 00:07:57.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.217 "is_configured": true, 00:07:57.217 "data_offset": 2048, 00:07:57.217 "data_size": 63488 00:07:57.217 }, 00:07:57.217 { 00:07:57.217 "name": "pt2", 00:07:57.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.217 "is_configured": true, 00:07:57.217 "data_offset": 2048, 00:07:57.217 "data_size": 63488 00:07:57.217 } 00:07:57.217 ] 00:07:57.217 }' 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.217 19:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.784 [2024-12-12 19:36:40.339061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.784 "name": "raid_bdev1", 00:07:57.784 "aliases": [ 00:07:57.784 "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f" 00:07:57.784 ], 00:07:57.784 "product_name": "Raid Volume", 00:07:57.784 "block_size": 512, 00:07:57.784 "num_blocks": 63488, 00:07:57.784 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:57.784 "assigned_rate_limits": { 00:07:57.784 "rw_ios_per_sec": 0, 00:07:57.784 "rw_mbytes_per_sec": 0, 00:07:57.784 "r_mbytes_per_sec": 0, 00:07:57.784 "w_mbytes_per_sec": 0 00:07:57.784 }, 00:07:57.784 "claimed": false, 00:07:57.784 "zoned": false, 00:07:57.784 "supported_io_types": { 00:07:57.784 "read": true, 00:07:57.784 "write": true, 00:07:57.784 "unmap": false, 00:07:57.784 "flush": false, 00:07:57.784 "reset": true, 00:07:57.784 "nvme_admin": false, 00:07:57.784 "nvme_io": false, 00:07:57.784 "nvme_io_md": false, 00:07:57.784 "write_zeroes": true, 00:07:57.784 "zcopy": false, 00:07:57.784 "get_zone_info": false, 00:07:57.784 "zone_management": false, 00:07:57.784 "zone_append": false, 00:07:57.784 "compare": false, 00:07:57.784 "compare_and_write": false, 00:07:57.784 "abort": false, 00:07:57.784 "seek_hole": false, 00:07:57.784 "seek_data": false, 00:07:57.784 "copy": false, 00:07:57.784 "nvme_iov_md": false 00:07:57.784 }, 00:07:57.784 "memory_domains": [ 00:07:57.784 { 00:07:57.784 "dma_device_id": 
"system", 00:07:57.784 "dma_device_type": 1 00:07:57.784 }, 00:07:57.784 { 00:07:57.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.784 "dma_device_type": 2 00:07:57.784 }, 00:07:57.784 { 00:07:57.784 "dma_device_id": "system", 00:07:57.784 "dma_device_type": 1 00:07:57.784 }, 00:07:57.784 { 00:07:57.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.784 "dma_device_type": 2 00:07:57.784 } 00:07:57.784 ], 00:07:57.784 "driver_specific": { 00:07:57.784 "raid": { 00:07:57.784 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:57.784 "strip_size_kb": 0, 00:07:57.784 "state": "online", 00:07:57.784 "raid_level": "raid1", 00:07:57.784 "superblock": true, 00:07:57.784 "num_base_bdevs": 2, 00:07:57.784 "num_base_bdevs_discovered": 2, 00:07:57.784 "num_base_bdevs_operational": 2, 00:07:57.784 "base_bdevs_list": [ 00:07:57.784 { 00:07:57.784 "name": "pt1", 00:07:57.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.784 "is_configured": true, 00:07:57.784 "data_offset": 2048, 00:07:57.784 "data_size": 63488 00:07:57.784 }, 00:07:57.784 { 00:07:57.784 "name": "pt2", 00:07:57.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.784 "is_configured": true, 00:07:57.784 "data_offset": 2048, 00:07:57.784 "data_size": 63488 00:07:57.784 } 00:07:57.784 ] 00:07:57.784 } 00:07:57.784 } 00:07:57.784 }' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:57.784 pt2' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:57.784 [2024-12-12 19:36:40.578785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c6f3fb14-9d3a-4022-8db3-c7beb5ef619f '!=' c6f3fb14-9d3a-4022-8db3-c7beb5ef619f ']' 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.784 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.042 [2024-12-12 19:36:40.630352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.042 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.043 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.043 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.043 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.043 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.043 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.043 19:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.043 "name": "raid_bdev1", 00:07:58.043 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:58.043 "strip_size_kb": 0, 00:07:58.043 "state": "online", 00:07:58.043 "raid_level": "raid1", 00:07:58.043 "superblock": true, 00:07:58.043 "num_base_bdevs": 2, 00:07:58.043 "num_base_bdevs_discovered": 1, 00:07:58.043 "num_base_bdevs_operational": 1, 00:07:58.043 "base_bdevs_list": [ 00:07:58.043 { 00:07:58.043 "name": null, 00:07:58.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.043 "is_configured": false, 00:07:58.043 "data_offset": 0, 00:07:58.043 "data_size": 63488 00:07:58.043 }, 00:07:58.043 { 00:07:58.043 "name": "pt2", 00:07:58.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.043 "is_configured": true, 00:07:58.043 "data_offset": 2048, 00:07:58.043 "data_size": 63488 00:07:58.043 } 00:07:58.043 ] 00:07:58.043 }' 00:07:58.043 19:36:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.043 19:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.300 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:58.300 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.300 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.300 [2024-12-12 19:36:41.125537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.300 [2024-12-12 19:36:41.125642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.300 [2024-12-12 19:36:41.125777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.300 [2024-12-12 19:36:41.125866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.300 [2024-12-12 19:36:41.125935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:58.300 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.300 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.300 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.300 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.300 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:58.300 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:58.558 
19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.558 [2024-12-12 19:36:41.193455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.558 [2024-12-12 19:36:41.193588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.558 [2024-12-12 19:36:41.193613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:58.558 [2024-12-12 19:36:41.193625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.558 [2024-12-12 
19:36:41.195861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.558 [2024-12-12 19:36:41.195903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.558 [2024-12-12 19:36:41.195990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:58.558 [2024-12-12 19:36:41.196040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.558 [2024-12-12 19:36:41.196143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:58.558 [2024-12-12 19:36:41.196155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.558 [2024-12-12 19:36:41.196400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:58.558 [2024-12-12 19:36:41.196590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:58.558 [2024-12-12 19:36:41.196602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:58.558 [2024-12-12 19:36:41.196750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.558 pt2 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.558 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.558 "name": "raid_bdev1", 00:07:58.558 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:58.558 "strip_size_kb": 0, 00:07:58.558 "state": "online", 00:07:58.558 "raid_level": "raid1", 00:07:58.558 "superblock": true, 00:07:58.558 "num_base_bdevs": 2, 00:07:58.558 "num_base_bdevs_discovered": 1, 00:07:58.558 "num_base_bdevs_operational": 1, 00:07:58.558 "base_bdevs_list": [ 00:07:58.558 { 00:07:58.558 "name": null, 00:07:58.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.558 "is_configured": false, 00:07:58.558 "data_offset": 2048, 00:07:58.558 "data_size": 63488 00:07:58.558 }, 00:07:58.558 { 00:07:58.559 "name": "pt2", 00:07:58.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.559 "is_configured": true, 00:07:58.559 "data_offset": 2048, 00:07:58.559 "data_size": 63488 00:07:58.559 } 00:07:58.559 ] 00:07:58.559 }' 
00:07:58.559 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.559 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 [2024-12-12 19:36:41.696600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.125 [2024-12-12 19:36:41.696686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.125 [2024-12-12 19:36:41.696800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.125 [2024-12-12 19:36:41.696891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.125 [2024-12-12 19:36:41.696946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 [2024-12-12 19:36:41.756508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:59.125 [2024-12-12 19:36:41.756624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.125 [2024-12-12 19:36:41.756693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:59.125 [2024-12-12 19:36:41.756727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.125 [2024-12-12 19:36:41.759093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.125 [2024-12-12 19:36:41.759167] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.125 [2024-12-12 19:36:41.759312] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:59.125 [2024-12-12 19:36:41.759412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:59.125 [2024-12-12 19:36:41.759637] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:59.125 [2024-12-12 19:36:41.759700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.125 [2024-12-12 19:36:41.759774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:59.125 [2024-12-12 19:36:41.759895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:59.125 [2024-12-12 19:36:41.760022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:59.125 [2024-12-12 19:36:41.760063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.125 [2024-12-12 19:36:41.760386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:59.125 [2024-12-12 19:36:41.760610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:59.125 [2024-12-12 19:36:41.760661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:59.125 [2024-12-12 19:36:41.760948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.125 pt1 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.125 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.125 "name": "raid_bdev1", 00:07:59.125 "uuid": "c6f3fb14-9d3a-4022-8db3-c7beb5ef619f", 00:07:59.125 "strip_size_kb": 0, 00:07:59.125 "state": "online", 00:07:59.125 "raid_level": "raid1", 00:07:59.126 "superblock": true, 00:07:59.126 "num_base_bdevs": 2, 00:07:59.126 "num_base_bdevs_discovered": 1, 00:07:59.126 "num_base_bdevs_operational": 1, 00:07:59.126 "base_bdevs_list": [ 00:07:59.126 { 00:07:59.126 "name": null, 00:07:59.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.126 "is_configured": false, 00:07:59.126 "data_offset": 2048, 00:07:59.126 "data_size": 63488 00:07:59.126 }, 00:07:59.126 { 00:07:59.126 "name": "pt2", 00:07:59.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.126 "is_configured": true, 00:07:59.126 "data_offset": 2048, 00:07:59.126 "data_size": 63488 00:07:59.126 } 00:07:59.126 ] 00:07:59.126 }' 00:07:59.126 19:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.126 19:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.384 19:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:59.384 19:36:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:59.384 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.385 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.385 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.643 [2024-12-12 19:36:42.244269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c6f3fb14-9d3a-4022-8db3-c7beb5ef619f '!=' c6f3fb14-9d3a-4022-8db3-c7beb5ef619f ']' 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64897 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64897 ']' 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64897 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64897 00:07:59.643 killing process with pid 
64897 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64897' 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64897 00:07:59.643 [2024-12-12 19:36:42.310273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.643 [2024-12-12 19:36:42.310355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.643 [2024-12-12 19:36:42.310405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.643 [2024-12-12 19:36:42.310418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:59.643 19:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64897 00:07:59.902 [2024-12-12 19:36:42.514666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.278 19:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:01.278 00:08:01.278 real 0m6.414s 00:08:01.278 user 0m9.684s 00:08:01.278 sys 0m1.074s 00:08:01.278 ************************************ 00:08:01.278 END TEST raid_superblock_test 00:08:01.278 ************************************ 00:08:01.278 19:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.278 19:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.278 19:36:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:01.278 19:36:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.278 19:36:43 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.278 19:36:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.278 ************************************ 00:08:01.278 START TEST raid_read_error_test 00:08:01.278 ************************************ 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:01.278 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:01.279 19:36:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mgapGnICiu 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65227 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65227 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65227 ']' 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.279 19:36:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.279 [2024-12-12 19:36:43.991487] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:01.279 [2024-12-12 19:36:43.991762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65227 ] 00:08:01.540 [2024-12-12 19:36:44.172883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.540 [2024-12-12 19:36:44.314182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.804 [2024-12-12 19:36:44.544077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.804 [2024-12-12 19:36:44.544246] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.373 BaseBdev1_malloc 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.373 true 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.373 [2024-12-12 19:36:44.979962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.373 [2024-12-12 19:36:44.980117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.373 [2024-12-12 19:36:44.980150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:02.373 [2024-12-12 19:36:44.980166] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.373 [2024-12-12 19:36:44.982764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.373 [2024-12-12 19:36:44.982822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.373 BaseBdev1 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.373 19:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:02.373 BaseBdev2_malloc 00:08:02.373 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.373 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:02.373 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.374 true 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.374 [2024-12-12 19:36:45.038283] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:02.374 [2024-12-12 19:36:45.038443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.374 [2024-12-12 19:36:45.038493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:02.374 [2024-12-12 19:36:45.038523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.374 [2024-12-12 19:36:45.041037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.374 [2024-12-12 19:36:45.041083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:02.374 BaseBdev2 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:02.374 19:36:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.374 [2024-12-12 19:36:45.050331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.374 [2024-12-12 19:36:45.052689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.374 [2024-12-12 19:36:45.052944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.374 [2024-12-12 19:36:45.052965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:02.374 [2024-12-12 19:36:45.053267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:02.374 [2024-12-12 19:36:45.053500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.374 [2024-12-12 19:36:45.053515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:02.374 [2024-12-12 19:36:45.053721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.374 "name": "raid_bdev1", 00:08:02.374 "uuid": "ab6702ba-5349-434b-98eb-6b3817885062", 00:08:02.374 "strip_size_kb": 0, 00:08:02.374 "state": "online", 00:08:02.374 "raid_level": "raid1", 00:08:02.374 "superblock": true, 00:08:02.374 "num_base_bdevs": 2, 00:08:02.374 "num_base_bdevs_discovered": 2, 00:08:02.374 "num_base_bdevs_operational": 2, 00:08:02.374 "base_bdevs_list": [ 00:08:02.374 { 00:08:02.374 "name": "BaseBdev1", 00:08:02.374 "uuid": "4c27b3ab-a95c-5e11-973e-3d4f53b23e40", 00:08:02.374 "is_configured": true, 00:08:02.374 "data_offset": 2048, 00:08:02.374 "data_size": 63488 00:08:02.374 }, 00:08:02.374 { 00:08:02.374 "name": "BaseBdev2", 00:08:02.374 "uuid": "b26f0ad4-4049-5d8b-9099-6ccca6798100", 00:08:02.374 "is_configured": true, 00:08:02.374 "data_offset": 2048, 00:08:02.374 "data_size": 63488 00:08:02.374 } 00:08:02.374 ] 00:08:02.374 }' 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.374 19:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.632 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:02.632 19:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:02.891 [2024-12-12 19:36:45.586916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:03.826 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:03.826 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.826 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.826 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.826 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:03.826 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:03.826 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:03.826 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.827 19:36:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.827 "name": "raid_bdev1", 00:08:03.827 "uuid": "ab6702ba-5349-434b-98eb-6b3817885062", 00:08:03.827 "strip_size_kb": 0, 00:08:03.827 "state": "online", 00:08:03.827 "raid_level": "raid1", 00:08:03.827 "superblock": true, 00:08:03.827 "num_base_bdevs": 2, 00:08:03.827 "num_base_bdevs_discovered": 2, 00:08:03.827 "num_base_bdevs_operational": 2, 00:08:03.827 "base_bdevs_list": [ 00:08:03.827 { 00:08:03.827 "name": "BaseBdev1", 00:08:03.827 "uuid": "4c27b3ab-a95c-5e11-973e-3d4f53b23e40", 00:08:03.827 "is_configured": true, 00:08:03.827 "data_offset": 2048, 00:08:03.827 "data_size": 63488 00:08:03.827 }, 00:08:03.827 { 00:08:03.827 "name": "BaseBdev2", 00:08:03.827 "uuid": "b26f0ad4-4049-5d8b-9099-6ccca6798100", 00:08:03.827 "is_configured": true, 00:08:03.827 "data_offset": 2048, 00:08:03.827 "data_size": 63488 
00:08:03.827 } 00:08:03.827 ] 00:08:03.827 }' 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.827 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.086 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.086 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.086 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.086 [2024-12-12 19:36:46.915948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.086 [2024-12-12 19:36:46.915990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.086 [2024-12-12 19:36:46.919323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.086 [2024-12-12 19:36:46.919386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.086 [2024-12-12 19:36:46.919493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.086 [2024-12-12 19:36:46.919509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:04.086 { 00:08:04.086 "results": [ 00:08:04.086 { 00:08:04.086 "job": "raid_bdev1", 00:08:04.086 "core_mask": "0x1", 00:08:04.086 "workload": "randrw", 00:08:04.086 "percentage": 50, 00:08:04.086 "status": "finished", 00:08:04.086 "queue_depth": 1, 00:08:04.086 "io_size": 131072, 00:08:04.086 "runtime": 1.329284, 00:08:04.086 "iops": 14254.290279579081, 00:08:04.086 "mibps": 1781.7862849473852, 00:08:04.086 "io_failed": 0, 00:08:04.086 "io_timeout": 0, 00:08:04.086 "avg_latency_us": 66.67110483022715, 00:08:04.086 "min_latency_us": 25.823580786026202, 00:08:04.086 "max_latency_us": 1810.1100436681222 00:08:04.086 } 00:08:04.086 ], 
00:08:04.086 "core_count": 1 00:08:04.086 } 00:08:04.086 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.086 19:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65227 00:08:04.086 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65227 ']' 00:08:04.086 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65227 00:08:04.086 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:04.086 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.344 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65227 00:08:04.344 killing process with pid 65227 00:08:04.344 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.344 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.344 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65227' 00:08:04.344 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65227 00:08:04.344 [2024-12-12 19:36:46.962531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.344 19:36:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65227 00:08:04.344 [2024-12-12 19:36:47.102609] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mgapGnICiu 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:05.716 ************************************ 00:08:05.716 END 
TEST raid_read_error_test 00:08:05.716 ************************************ 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:05.716 00:08:05.716 real 0m4.500s 00:08:05.716 user 0m5.449s 00:08:05.716 sys 0m0.520s 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.716 19:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.716 19:36:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:05.716 19:36:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:05.716 19:36:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.716 19:36:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.716 ************************************ 00:08:05.716 START TEST raid_write_error_test 00:08:05.716 ************************************ 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jn2e6lF47z 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65373 00:08:05.716 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65373 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65373 ']' 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.716 19:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:05.716 [2024-12-12 19:36:48.531993] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:05.716 [2024-12-12 19:36:48.532111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65373 ] 00:08:05.973 [2024-12-12 19:36:48.703374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.230 [2024-12-12 19:36:48.820856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.230 [2024-12-12 19:36:49.021451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.230 [2024-12-12 19:36:49.021564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.796 BaseBdev1_malloc 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.796 true 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.796 [2024-12-12 19:36:49.420679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:06.796 [2024-12-12 19:36:49.420738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.796 [2024-12-12 19:36:49.420758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:06.796 [2024-12-12 19:36:49.420769] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.796 [2024-12-12 19:36:49.422876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.796 [2024-12-12 19:36:49.422919] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:06.796 BaseBdev1 00:08:06.796 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.797 BaseBdev2_malloc 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:06.797 19:36:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.797 true 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.797 [2024-12-12 19:36:49.484179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.797 [2024-12-12 19:36:49.484235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.797 [2024-12-12 19:36:49.484267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:06.797 [2024-12-12 19:36:49.484279] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.797 [2024-12-12 19:36:49.486489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.797 [2024-12-12 19:36:49.486531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:06.797 BaseBdev2 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.797 [2024-12-12 19:36:49.496197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:06.797 [2024-12-12 19:36:49.498048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.797 [2024-12-12 19:36:49.498249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:06.797 [2024-12-12 19:36:49.498265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.797 [2024-12-12 19:36:49.498496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:06.797 [2024-12-12 19:36:49.498710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:06.797 [2024-12-12 19:36:49.498721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:06.797 [2024-12-12 19:36:49.498863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.797 "name": "raid_bdev1", 00:08:06.797 "uuid": "657c2de6-6709-4090-bfd6-0d9106fc0328", 00:08:06.797 "strip_size_kb": 0, 00:08:06.797 "state": "online", 00:08:06.797 "raid_level": "raid1", 00:08:06.797 "superblock": true, 00:08:06.797 "num_base_bdevs": 2, 00:08:06.797 "num_base_bdevs_discovered": 2, 00:08:06.797 "num_base_bdevs_operational": 2, 00:08:06.797 "base_bdevs_list": [ 00:08:06.797 { 00:08:06.797 "name": "BaseBdev1", 00:08:06.797 "uuid": "bc6eeebe-9d7c-5410-aefe-46ae6051d425", 00:08:06.797 "is_configured": true, 00:08:06.797 "data_offset": 2048, 00:08:06.797 "data_size": 63488 00:08:06.797 }, 00:08:06.797 { 00:08:06.797 "name": "BaseBdev2", 00:08:06.797 "uuid": "8f327181-eb86-513e-bc02-f35e4e97b8db", 00:08:06.797 "is_configured": true, 00:08:06.797 "data_offset": 2048, 00:08:06.797 "data_size": 63488 00:08:06.797 } 00:08:06.797 ] 00:08:06.797 }' 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.797 19:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.365 19:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.365 19:36:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.365 [2024-12-12 19:36:50.052676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.300 [2024-12-12 19:36:50.960843] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:08.300 [2024-12-12 19:36:50.961010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.300 [2024-12-12 19:36:50.961250] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.300 19:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.300 19:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.300 "name": "raid_bdev1", 00:08:08.300 "uuid": "657c2de6-6709-4090-bfd6-0d9106fc0328", 00:08:08.300 "strip_size_kb": 0, 00:08:08.300 "state": "online", 00:08:08.300 "raid_level": "raid1", 00:08:08.300 "superblock": true, 00:08:08.300 "num_base_bdevs": 2, 00:08:08.300 "num_base_bdevs_discovered": 1, 00:08:08.300 "num_base_bdevs_operational": 1, 00:08:08.300 "base_bdevs_list": [ 00:08:08.300 { 00:08:08.300 "name": null, 00:08:08.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.300 "is_configured": false, 00:08:08.300 "data_offset": 0, 00:08:08.300 "data_size": 63488 00:08:08.300 }, 00:08:08.300 { 00:08:08.300 "name": 
"BaseBdev2", 00:08:08.300 "uuid": "8f327181-eb86-513e-bc02-f35e4e97b8db", 00:08:08.300 "is_configured": true, 00:08:08.300 "data_offset": 2048, 00:08:08.300 "data_size": 63488 00:08:08.300 } 00:08:08.300 ] 00:08:08.300 }' 00:08:08.300 19:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.300 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.868 [2024-12-12 19:36:51.454193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.868 [2024-12-12 19:36:51.454296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.868 [2024-12-12 19:36:51.457240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.868 [2024-12-12 19:36:51.457319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.868 [2024-12-12 19:36:51.457407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.868 [2024-12-12 19:36:51.457475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:08.868 { 00:08:08.868 "results": [ 00:08:08.868 { 00:08:08.868 "job": "raid_bdev1", 00:08:08.868 "core_mask": "0x1", 00:08:08.868 "workload": "randrw", 00:08:08.868 "percentage": 50, 00:08:08.868 "status": "finished", 00:08:08.868 "queue_depth": 1, 00:08:08.868 "io_size": 131072, 00:08:08.868 "runtime": 1.402313, 00:08:08.868 "iops": 19630.424876614565, 00:08:08.868 "mibps": 2453.8031095768206, 00:08:08.868 "io_failed": 0, 00:08:08.868 "io_timeout": 0, 
00:08:08.868 "avg_latency_us": 48.03974674773378, 00:08:08.868 "min_latency_us": 23.699563318777294, 00:08:08.868 "max_latency_us": 1638.4 00:08:08.868 } 00:08:08.868 ], 00:08:08.868 "core_count": 1 00:08:08.868 } 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65373 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65373 ']' 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65373 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65373 00:08:08.868 killing process with pid 65373 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65373' 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65373 00:08:08.868 [2024-12-12 19:36:51.499435] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.868 19:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65373 00:08:08.868 [2024-12-12 19:36:51.637732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jn2e6lF47z 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
grep raid_bdev1 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:10.250 ************************************ 00:08:10.250 END TEST raid_write_error_test 00:08:10.250 ************************************ 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:10.250 00:08:10.250 real 0m4.433s 00:08:10.250 user 0m5.350s 00:08:10.250 sys 0m0.542s 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.250 19:36:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.250 19:36:52 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:10.250 19:36:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:10.250 19:36:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:10.250 19:36:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.250 19:36:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.250 19:36:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.250 ************************************ 00:08:10.250 START TEST raid_state_function_test 00:08:10.250 ************************************ 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:10.250 19:36:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65511 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:10.250 Process raid pid: 65511 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65511' 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65511 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65511 ']' 00:08:10.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.250 19:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.250 [2024-12-12 19:36:53.027888] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:10.250 [2024-12-12 19:36:53.028004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.516 [2024-12-12 19:36:53.183760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.516 [2024-12-12 19:36:53.331574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.774 [2024-12-12 19:36:53.549041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.774 [2024-12-12 19:36:53.549098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.339 [2024-12-12 19:36:53.924983] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.339 [2024-12-12 19:36:53.925048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.339 [2024-12-12 19:36:53.925066] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.339 [2024-12-12 19:36:53.925083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.339 [2024-12-12 19:36:53.925095] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.339 [2024-12-12 19:36:53.925111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.339 19:36:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.339 "name": "Existed_Raid", 00:08:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.339 "strip_size_kb": 64, 00:08:11.339 "state": "configuring", 00:08:11.339 "raid_level": "raid0", 00:08:11.339 "superblock": false, 00:08:11.339 "num_base_bdevs": 3, 00:08:11.339 "num_base_bdevs_discovered": 0, 00:08:11.339 "num_base_bdevs_operational": 3, 00:08:11.339 "base_bdevs_list": [ 00:08:11.339 { 00:08:11.339 "name": "BaseBdev1", 00:08:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.339 "is_configured": false, 00:08:11.339 "data_offset": 0, 00:08:11.339 "data_size": 0 00:08:11.339 }, 00:08:11.339 { 00:08:11.339 "name": "BaseBdev2", 00:08:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.339 "is_configured": false, 00:08:11.339 "data_offset": 0, 00:08:11.339 "data_size": 0 00:08:11.339 }, 00:08:11.339 { 00:08:11.339 "name": "BaseBdev3", 00:08:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.339 "is_configured": false, 00:08:11.339 "data_offset": 0, 00:08:11.339 "data_size": 0 00:08:11.339 } 00:08:11.339 ] 00:08:11.339 }' 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.339 19:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.599 19:36:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 [2024-12-12 19:36:54.316448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.599 [2024-12-12 19:36:54.316487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 [2024-12-12 19:36:54.328420] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.599 [2024-12-12 19:36:54.328506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.599 [2024-12-12 19:36:54.328537] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.599 [2024-12-12 19:36:54.328594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.599 [2024-12-12 19:36:54.328626] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.599 [2024-12-12 19:36:54.328648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 [2024-12-12 19:36:54.387169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.599 BaseBdev1 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 [ 00:08:11.599 { 00:08:11.599 "name": "BaseBdev1", 00:08:11.599 "aliases": [ 00:08:11.599 "d2a091f1-a6b4-4ef0-a3a4-e9b993c4d5d0" 00:08:11.599 ], 00:08:11.599 
"product_name": "Malloc disk", 00:08:11.599 "block_size": 512, 00:08:11.599 "num_blocks": 65536, 00:08:11.599 "uuid": "d2a091f1-a6b4-4ef0-a3a4-e9b993c4d5d0", 00:08:11.599 "assigned_rate_limits": { 00:08:11.599 "rw_ios_per_sec": 0, 00:08:11.599 "rw_mbytes_per_sec": 0, 00:08:11.599 "r_mbytes_per_sec": 0, 00:08:11.599 "w_mbytes_per_sec": 0 00:08:11.599 }, 00:08:11.599 "claimed": true, 00:08:11.599 "claim_type": "exclusive_write", 00:08:11.599 "zoned": false, 00:08:11.599 "supported_io_types": { 00:08:11.599 "read": true, 00:08:11.599 "write": true, 00:08:11.599 "unmap": true, 00:08:11.599 "flush": true, 00:08:11.599 "reset": true, 00:08:11.599 "nvme_admin": false, 00:08:11.599 "nvme_io": false, 00:08:11.599 "nvme_io_md": false, 00:08:11.599 "write_zeroes": true, 00:08:11.599 "zcopy": true, 00:08:11.599 "get_zone_info": false, 00:08:11.599 "zone_management": false, 00:08:11.599 "zone_append": false, 00:08:11.599 "compare": false, 00:08:11.599 "compare_and_write": false, 00:08:11.599 "abort": true, 00:08:11.599 "seek_hole": false, 00:08:11.599 "seek_data": false, 00:08:11.599 "copy": true, 00:08:11.599 "nvme_iov_md": false 00:08:11.599 }, 00:08:11.599 "memory_domains": [ 00:08:11.599 { 00:08:11.599 "dma_device_id": "system", 00:08:11.599 "dma_device_type": 1 00:08:11.599 }, 00:08:11.599 { 00:08:11.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.599 "dma_device_type": 2 00:08:11.599 } 00:08:11.599 ], 00:08:11.599 "driver_specific": {} 00:08:11.599 } 00:08:11.599 ] 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.599 19:36:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.599 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.859 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.859 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.859 "name": "Existed_Raid", 00:08:11.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.859 "strip_size_kb": 64, 00:08:11.859 "state": "configuring", 00:08:11.859 "raid_level": "raid0", 00:08:11.859 "superblock": false, 00:08:11.859 "num_base_bdevs": 3, 00:08:11.859 "num_base_bdevs_discovered": 1, 00:08:11.859 "num_base_bdevs_operational": 3, 00:08:11.859 "base_bdevs_list": [ 00:08:11.859 { 00:08:11.859 "name": "BaseBdev1", 
00:08:11.859 "uuid": "d2a091f1-a6b4-4ef0-a3a4-e9b993c4d5d0", 00:08:11.859 "is_configured": true, 00:08:11.859 "data_offset": 0, 00:08:11.859 "data_size": 65536 00:08:11.859 }, 00:08:11.859 { 00:08:11.859 "name": "BaseBdev2", 00:08:11.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.859 "is_configured": false, 00:08:11.859 "data_offset": 0, 00:08:11.859 "data_size": 0 00:08:11.859 }, 00:08:11.859 { 00:08:11.859 "name": "BaseBdev3", 00:08:11.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.859 "is_configured": false, 00:08:11.859 "data_offset": 0, 00:08:11.859 "data_size": 0 00:08:11.859 } 00:08:11.859 ] 00:08:11.859 }' 00:08:11.859 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.859 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.119 [2024-12-12 19:36:54.850468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.119 [2024-12-12 19:36:54.850604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.119 [2024-12-12 
19:36:54.862480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.119 [2024-12-12 19:36:54.864303] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:12.119 [2024-12-12 19:36:54.864379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:12.119 [2024-12-12 19:36:54.864424] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:12.119 [2024-12-12 19:36:54.864446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.119 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.119 "name": "Existed_Raid", 00:08:12.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.120 "strip_size_kb": 64, 00:08:12.120 "state": "configuring", 00:08:12.120 "raid_level": "raid0", 00:08:12.120 "superblock": false, 00:08:12.120 "num_base_bdevs": 3, 00:08:12.120 "num_base_bdevs_discovered": 1, 00:08:12.120 "num_base_bdevs_operational": 3, 00:08:12.120 "base_bdevs_list": [ 00:08:12.120 { 00:08:12.120 "name": "BaseBdev1", 00:08:12.120 "uuid": "d2a091f1-a6b4-4ef0-a3a4-e9b993c4d5d0", 00:08:12.120 "is_configured": true, 00:08:12.120 "data_offset": 0, 00:08:12.120 "data_size": 65536 00:08:12.120 }, 00:08:12.120 { 00:08:12.120 "name": "BaseBdev2", 00:08:12.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.120 "is_configured": false, 00:08:12.120 "data_offset": 0, 00:08:12.120 "data_size": 0 00:08:12.120 }, 00:08:12.120 { 00:08:12.120 "name": "BaseBdev3", 00:08:12.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.120 "is_configured": false, 00:08:12.120 "data_offset": 0, 00:08:12.120 "data_size": 0 00:08:12.120 } 00:08:12.120 ] 00:08:12.120 }' 00:08:12.120 19:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:12.120 19:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.690 [2024-12-12 19:36:55.344397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.690 BaseBdev2 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.690 19:36:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.690 [ 00:08:12.690 { 00:08:12.690 "name": "BaseBdev2", 00:08:12.690 "aliases": [ 00:08:12.690 "4c7a0749-a669-4617-9d29-4ba213f1a769" 00:08:12.690 ], 00:08:12.690 "product_name": "Malloc disk", 00:08:12.690 "block_size": 512, 00:08:12.690 "num_blocks": 65536, 00:08:12.690 "uuid": "4c7a0749-a669-4617-9d29-4ba213f1a769", 00:08:12.690 "assigned_rate_limits": { 00:08:12.690 "rw_ios_per_sec": 0, 00:08:12.690 "rw_mbytes_per_sec": 0, 00:08:12.690 "r_mbytes_per_sec": 0, 00:08:12.690 "w_mbytes_per_sec": 0 00:08:12.690 }, 00:08:12.690 "claimed": true, 00:08:12.690 "claim_type": "exclusive_write", 00:08:12.690 "zoned": false, 00:08:12.690 "supported_io_types": { 00:08:12.690 "read": true, 00:08:12.690 "write": true, 00:08:12.690 "unmap": true, 00:08:12.690 "flush": true, 00:08:12.690 "reset": true, 00:08:12.690 "nvme_admin": false, 00:08:12.690 "nvme_io": false, 00:08:12.690 "nvme_io_md": false, 00:08:12.690 "write_zeroes": true, 00:08:12.690 "zcopy": true, 00:08:12.690 "get_zone_info": false, 00:08:12.690 "zone_management": false, 00:08:12.690 "zone_append": false, 00:08:12.690 "compare": false, 00:08:12.690 "compare_and_write": false, 00:08:12.690 "abort": true, 00:08:12.690 "seek_hole": false, 00:08:12.690 "seek_data": false, 00:08:12.690 "copy": true, 00:08:12.690 "nvme_iov_md": false 00:08:12.690 }, 00:08:12.690 "memory_domains": [ 00:08:12.690 { 00:08:12.690 "dma_device_id": "system", 00:08:12.690 "dma_device_type": 1 00:08:12.690 }, 00:08:12.690 { 00:08:12.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.690 "dma_device_type": 2 00:08:12.690 } 00:08:12.690 ], 00:08:12.690 "driver_specific": {} 00:08:12.690 } 00:08:12.690 ] 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.690 19:36:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.690 "name": "Existed_Raid", 00:08:12.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.690 "strip_size_kb": 64, 00:08:12.690 "state": "configuring", 00:08:12.690 "raid_level": "raid0", 00:08:12.690 "superblock": false, 00:08:12.690 "num_base_bdevs": 3, 00:08:12.690 "num_base_bdevs_discovered": 2, 00:08:12.690 "num_base_bdevs_operational": 3, 00:08:12.690 "base_bdevs_list": [ 00:08:12.690 { 00:08:12.690 "name": "BaseBdev1", 00:08:12.690 "uuid": "d2a091f1-a6b4-4ef0-a3a4-e9b993c4d5d0", 00:08:12.690 "is_configured": true, 00:08:12.690 "data_offset": 0, 00:08:12.690 "data_size": 65536 00:08:12.690 }, 00:08:12.690 { 00:08:12.690 "name": "BaseBdev2", 00:08:12.690 "uuid": "4c7a0749-a669-4617-9d29-4ba213f1a769", 00:08:12.690 "is_configured": true, 00:08:12.690 "data_offset": 0, 00:08:12.690 "data_size": 65536 00:08:12.690 }, 00:08:12.690 { 00:08:12.690 "name": "BaseBdev3", 00:08:12.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.690 "is_configured": false, 00:08:12.690 "data_offset": 0, 00:08:12.690 "data_size": 0 00:08:12.690 } 00:08:12.690 ] 00:08:12.690 }' 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.690 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.949 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.949 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.949 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.209 [2024-12-12 19:36:55.832180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:13.209 [2024-12-12 19:36:55.832222] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:13.209 [2024-12-12 19:36:55.832235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:13.209 [2024-12-12 19:36:55.832482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:13.209 [2024-12-12 19:36:55.832683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:13.209 [2024-12-12 19:36:55.832694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:13.209 [2024-12-12 19:36:55.833000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.209 BaseBdev3 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.209 
19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.209 [ 00:08:13.209 { 00:08:13.209 "name": "BaseBdev3", 00:08:13.209 "aliases": [ 00:08:13.209 "c2a7d352-cd98-499e-b12c-cf3db01b1702" 00:08:13.209 ], 00:08:13.209 "product_name": "Malloc disk", 00:08:13.209 "block_size": 512, 00:08:13.209 "num_blocks": 65536, 00:08:13.209 "uuid": "c2a7d352-cd98-499e-b12c-cf3db01b1702", 00:08:13.209 "assigned_rate_limits": { 00:08:13.209 "rw_ios_per_sec": 0, 00:08:13.209 "rw_mbytes_per_sec": 0, 00:08:13.209 "r_mbytes_per_sec": 0, 00:08:13.209 "w_mbytes_per_sec": 0 00:08:13.209 }, 00:08:13.209 "claimed": true, 00:08:13.209 "claim_type": "exclusive_write", 00:08:13.209 "zoned": false, 00:08:13.209 "supported_io_types": { 00:08:13.209 "read": true, 00:08:13.209 "write": true, 00:08:13.209 "unmap": true, 00:08:13.209 "flush": true, 00:08:13.209 "reset": true, 00:08:13.209 "nvme_admin": false, 00:08:13.209 "nvme_io": false, 00:08:13.209 "nvme_io_md": false, 00:08:13.209 "write_zeroes": true, 00:08:13.209 "zcopy": true, 00:08:13.209 "get_zone_info": false, 00:08:13.209 "zone_management": false, 00:08:13.209 "zone_append": false, 00:08:13.209 "compare": false, 00:08:13.209 "compare_and_write": false, 00:08:13.209 "abort": true, 00:08:13.209 "seek_hole": false, 00:08:13.209 "seek_data": false, 00:08:13.209 "copy": true, 00:08:13.209 "nvme_iov_md": false 00:08:13.209 }, 00:08:13.209 "memory_domains": [ 00:08:13.209 { 00:08:13.209 "dma_device_id": "system", 00:08:13.209 "dma_device_type": 1 00:08:13.209 }, 00:08:13.209 { 00:08:13.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.209 "dma_device_type": 2 00:08:13.209 } 00:08:13.209 ], 00:08:13.209 "driver_specific": {} 00:08:13.209 } 00:08:13.209 ] 
00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:13.209 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.210 "name": "Existed_Raid", 00:08:13.210 "uuid": "9fdc22f5-219b-436a-affa-820ea4171d00", 00:08:13.210 "strip_size_kb": 64, 00:08:13.210 "state": "online", 00:08:13.210 "raid_level": "raid0", 00:08:13.210 "superblock": false, 00:08:13.210 "num_base_bdevs": 3, 00:08:13.210 "num_base_bdevs_discovered": 3, 00:08:13.210 "num_base_bdevs_operational": 3, 00:08:13.210 "base_bdevs_list": [ 00:08:13.210 { 00:08:13.210 "name": "BaseBdev1", 00:08:13.210 "uuid": "d2a091f1-a6b4-4ef0-a3a4-e9b993c4d5d0", 00:08:13.210 "is_configured": true, 00:08:13.210 "data_offset": 0, 00:08:13.210 "data_size": 65536 00:08:13.210 }, 00:08:13.210 { 00:08:13.210 "name": "BaseBdev2", 00:08:13.210 "uuid": "4c7a0749-a669-4617-9d29-4ba213f1a769", 00:08:13.210 "is_configured": true, 00:08:13.210 "data_offset": 0, 00:08:13.210 "data_size": 65536 00:08:13.210 }, 00:08:13.210 { 00:08:13.210 "name": "BaseBdev3", 00:08:13.210 "uuid": "c2a7d352-cd98-499e-b12c-cf3db01b1702", 00:08:13.210 "is_configured": true, 00:08:13.210 "data_offset": 0, 00:08:13.210 "data_size": 65536 00:08:13.210 } 00:08:13.210 ] 00:08:13.210 }' 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.210 19:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.470 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.470 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.470 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.470 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:13.470 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.470 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.730 [2024-12-12 19:36:56.323761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.730 "name": "Existed_Raid", 00:08:13.730 "aliases": [ 00:08:13.730 "9fdc22f5-219b-436a-affa-820ea4171d00" 00:08:13.730 ], 00:08:13.730 "product_name": "Raid Volume", 00:08:13.730 "block_size": 512, 00:08:13.730 "num_blocks": 196608, 00:08:13.730 "uuid": "9fdc22f5-219b-436a-affa-820ea4171d00", 00:08:13.730 "assigned_rate_limits": { 00:08:13.730 "rw_ios_per_sec": 0, 00:08:13.730 "rw_mbytes_per_sec": 0, 00:08:13.730 "r_mbytes_per_sec": 0, 00:08:13.730 "w_mbytes_per_sec": 0 00:08:13.730 }, 00:08:13.730 "claimed": false, 00:08:13.730 "zoned": false, 00:08:13.730 "supported_io_types": { 00:08:13.730 "read": true, 00:08:13.730 "write": true, 00:08:13.730 "unmap": true, 00:08:13.730 "flush": true, 00:08:13.730 "reset": true, 00:08:13.730 "nvme_admin": false, 00:08:13.730 "nvme_io": false, 00:08:13.730 "nvme_io_md": false, 00:08:13.730 "write_zeroes": true, 00:08:13.730 "zcopy": false, 00:08:13.730 "get_zone_info": false, 00:08:13.730 "zone_management": false, 00:08:13.730 
"zone_append": false, 00:08:13.730 "compare": false, 00:08:13.730 "compare_and_write": false, 00:08:13.730 "abort": false, 00:08:13.730 "seek_hole": false, 00:08:13.730 "seek_data": false, 00:08:13.730 "copy": false, 00:08:13.730 "nvme_iov_md": false 00:08:13.730 }, 00:08:13.730 "memory_domains": [ 00:08:13.730 { 00:08:13.730 "dma_device_id": "system", 00:08:13.730 "dma_device_type": 1 00:08:13.730 }, 00:08:13.730 { 00:08:13.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.730 "dma_device_type": 2 00:08:13.730 }, 00:08:13.730 { 00:08:13.730 "dma_device_id": "system", 00:08:13.730 "dma_device_type": 1 00:08:13.730 }, 00:08:13.730 { 00:08:13.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.730 "dma_device_type": 2 00:08:13.730 }, 00:08:13.730 { 00:08:13.730 "dma_device_id": "system", 00:08:13.730 "dma_device_type": 1 00:08:13.730 }, 00:08:13.730 { 00:08:13.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.730 "dma_device_type": 2 00:08:13.730 } 00:08:13.730 ], 00:08:13.730 "driver_specific": { 00:08:13.730 "raid": { 00:08:13.730 "uuid": "9fdc22f5-219b-436a-affa-820ea4171d00", 00:08:13.730 "strip_size_kb": 64, 00:08:13.730 "state": "online", 00:08:13.730 "raid_level": "raid0", 00:08:13.730 "superblock": false, 00:08:13.730 "num_base_bdevs": 3, 00:08:13.730 "num_base_bdevs_discovered": 3, 00:08:13.730 "num_base_bdevs_operational": 3, 00:08:13.730 "base_bdevs_list": [ 00:08:13.730 { 00:08:13.730 "name": "BaseBdev1", 00:08:13.730 "uuid": "d2a091f1-a6b4-4ef0-a3a4-e9b993c4d5d0", 00:08:13.730 "is_configured": true, 00:08:13.730 "data_offset": 0, 00:08:13.730 "data_size": 65536 00:08:13.730 }, 00:08:13.730 { 00:08:13.730 "name": "BaseBdev2", 00:08:13.730 "uuid": "4c7a0749-a669-4617-9d29-4ba213f1a769", 00:08:13.730 "is_configured": true, 00:08:13.730 "data_offset": 0, 00:08:13.730 "data_size": 65536 00:08:13.730 }, 00:08:13.730 { 00:08:13.730 "name": "BaseBdev3", 00:08:13.730 "uuid": "c2a7d352-cd98-499e-b12c-cf3db01b1702", 00:08:13.730 "is_configured": true, 
00:08:13.730 "data_offset": 0, 00:08:13.730 "data_size": 65536 00:08:13.730 } 00:08:13.730 ] 00:08:13.730 } 00:08:13.730 } 00:08:13.730 }' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:13.730 BaseBdev2 00:08:13.730 BaseBdev3' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.730 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.991 [2024-12-12 19:36:56.598969] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:13.991 [2024-12-12 19:36:56.599043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.991 [2024-12-12 19:36:56.599118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.991 "name": "Existed_Raid", 00:08:13.991 "uuid": "9fdc22f5-219b-436a-affa-820ea4171d00", 00:08:13.991 "strip_size_kb": 64, 00:08:13.991 "state": "offline", 00:08:13.991 "raid_level": "raid0", 00:08:13.991 "superblock": false, 00:08:13.991 "num_base_bdevs": 3, 00:08:13.991 "num_base_bdevs_discovered": 2, 00:08:13.991 "num_base_bdevs_operational": 2, 00:08:13.991 "base_bdevs_list": [ 00:08:13.991 { 00:08:13.991 "name": null, 00:08:13.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.991 "is_configured": false, 00:08:13.991 "data_offset": 0, 00:08:13.991 "data_size": 65536 00:08:13.991 }, 00:08:13.991 { 00:08:13.991 "name": "BaseBdev2", 00:08:13.991 "uuid": "4c7a0749-a669-4617-9d29-4ba213f1a769", 00:08:13.991 "is_configured": true, 00:08:13.991 "data_offset": 0, 00:08:13.991 "data_size": 65536 00:08:13.991 }, 00:08:13.991 { 00:08:13.991 "name": "BaseBdev3", 00:08:13.991 "uuid": "c2a7d352-cd98-499e-b12c-cf3db01b1702", 00:08:13.991 "is_configured": true, 00:08:13.991 "data_offset": 0, 00:08:13.991 "data_size": 65536 00:08:13.991 } 00:08:13.991 ] 00:08:13.991 }' 00:08:13.991 19:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.991 19:36:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.561 [2024-12-12 19:36:57.159685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.561 [2024-12-12 19:36:57.299853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.561 [2024-12-12 19:36:57.299954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.561 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.821 19:36:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.822 BaseBdev2 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.822 19:36:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.822 [ 00:08:14.822 { 00:08:14.822 "name": "BaseBdev2", 00:08:14.822 "aliases": [ 00:08:14.822 "86422579-ece9-471e-87f6-218d1b487a10" 00:08:14.822 ], 00:08:14.822 "product_name": "Malloc disk", 00:08:14.822 "block_size": 512, 00:08:14.822 "num_blocks": 65536, 00:08:14.822 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:14.822 "assigned_rate_limits": { 00:08:14.822 "rw_ios_per_sec": 0, 00:08:14.822 "rw_mbytes_per_sec": 0, 00:08:14.822 "r_mbytes_per_sec": 0, 00:08:14.822 "w_mbytes_per_sec": 0 00:08:14.822 }, 00:08:14.822 "claimed": false, 00:08:14.822 "zoned": false, 00:08:14.822 "supported_io_types": { 00:08:14.822 "read": true, 00:08:14.822 "write": true, 00:08:14.822 "unmap": true, 00:08:14.822 "flush": true, 00:08:14.822 "reset": true, 00:08:14.822 "nvme_admin": false, 00:08:14.822 "nvme_io": false, 00:08:14.822 "nvme_io_md": false, 00:08:14.822 "write_zeroes": true, 00:08:14.822 "zcopy": true, 00:08:14.822 "get_zone_info": false, 00:08:14.822 "zone_management": false, 00:08:14.822 "zone_append": false, 00:08:14.822 "compare": false, 00:08:14.822 "compare_and_write": false, 00:08:14.822 "abort": true, 00:08:14.822 "seek_hole": false, 00:08:14.822 "seek_data": false, 00:08:14.822 "copy": true, 00:08:14.822 "nvme_iov_md": false 00:08:14.822 }, 00:08:14.822 "memory_domains": [ 00:08:14.822 { 00:08:14.822 "dma_device_id": "system", 00:08:14.822 "dma_device_type": 1 00:08:14.822 }, 00:08:14.822 { 00:08:14.822 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:14.822 "dma_device_type": 2 00:08:14.822 } 00:08:14.822 ], 00:08:14.822 "driver_specific": {} 00:08:14.822 } 00:08:14.822 ] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.822 BaseBdev3 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.822 19:36:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.822 [ 00:08:14.822 { 00:08:14.822 "name": "BaseBdev3", 00:08:14.822 "aliases": [ 00:08:14.822 "f12241b7-a811-4d02-a25e-9752d3e23269" 00:08:14.822 ], 00:08:14.822 "product_name": "Malloc disk", 00:08:14.822 "block_size": 512, 00:08:14.822 "num_blocks": 65536, 00:08:14.822 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:14.822 "assigned_rate_limits": { 00:08:14.822 "rw_ios_per_sec": 0, 00:08:14.822 "rw_mbytes_per_sec": 0, 00:08:14.822 "r_mbytes_per_sec": 0, 00:08:14.822 "w_mbytes_per_sec": 0 00:08:14.822 }, 00:08:14.822 "claimed": false, 00:08:14.822 "zoned": false, 00:08:14.822 "supported_io_types": { 00:08:14.822 "read": true, 00:08:14.822 "write": true, 00:08:14.822 "unmap": true, 00:08:14.822 "flush": true, 00:08:14.822 "reset": true, 00:08:14.822 "nvme_admin": false, 00:08:14.822 "nvme_io": false, 00:08:14.822 "nvme_io_md": false, 00:08:14.822 "write_zeroes": true, 00:08:14.822 "zcopy": true, 00:08:14.822 "get_zone_info": false, 00:08:14.822 "zone_management": false, 00:08:14.822 "zone_append": false, 00:08:14.822 "compare": false, 00:08:14.822 "compare_and_write": false, 00:08:14.822 "abort": true, 00:08:14.822 "seek_hole": false, 00:08:14.822 "seek_data": false, 00:08:14.822 "copy": true, 00:08:14.822 "nvme_iov_md": false 00:08:14.822 }, 00:08:14.822 "memory_domains": [ 00:08:14.822 { 00:08:14.822 "dma_device_id": "system", 00:08:14.822 "dma_device_type": 1 00:08:14.822 }, 00:08:14.822 { 00:08:14.822 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:14.822 "dma_device_type": 2 00:08:14.822 } 00:08:14.822 ], 00:08:14.822 "driver_specific": {} 00:08:14.822 } 00:08:14.822 ] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.822 [2024-12-12 19:36:57.613680] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.822 [2024-12-12 19:36:57.613767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.822 [2024-12-12 19:36:57.613811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.822 [2024-12-12 19:36:57.615579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.822 
19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.822 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.823 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.823 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.823 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.823 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.823 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.082 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.082 "name": "Existed_Raid", 00:08:15.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.082 "strip_size_kb": 64, 00:08:15.082 "state": "configuring", 00:08:15.082 "raid_level": "raid0", 00:08:15.082 "superblock": false, 00:08:15.082 "num_base_bdevs": 3, 00:08:15.082 "num_base_bdevs_discovered": 2, 00:08:15.082 "num_base_bdevs_operational": 3, 00:08:15.082 "base_bdevs_list": [ 00:08:15.082 { 00:08:15.082 "name": "BaseBdev1", 00:08:15.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.082 "is_configured": false, 00:08:15.082 
"data_offset": 0, 00:08:15.082 "data_size": 0 00:08:15.082 }, 00:08:15.082 { 00:08:15.082 "name": "BaseBdev2", 00:08:15.082 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:15.082 "is_configured": true, 00:08:15.082 "data_offset": 0, 00:08:15.082 "data_size": 65536 00:08:15.082 }, 00:08:15.082 { 00:08:15.082 "name": "BaseBdev3", 00:08:15.082 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:15.082 "is_configured": true, 00:08:15.082 "data_offset": 0, 00:08:15.082 "data_size": 65536 00:08:15.082 } 00:08:15.082 ] 00:08:15.082 }' 00:08:15.082 19:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.082 19:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.342 [2024-12-12 19:36:58.061047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.342 "name": "Existed_Raid", 00:08:15.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.342 "strip_size_kb": 64, 00:08:15.342 "state": "configuring", 00:08:15.342 "raid_level": "raid0", 00:08:15.342 "superblock": false, 00:08:15.342 "num_base_bdevs": 3, 00:08:15.342 "num_base_bdevs_discovered": 1, 00:08:15.342 "num_base_bdevs_operational": 3, 00:08:15.342 "base_bdevs_list": [ 00:08:15.342 { 00:08:15.342 "name": "BaseBdev1", 00:08:15.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.342 "is_configured": false, 00:08:15.342 "data_offset": 0, 00:08:15.342 "data_size": 0 00:08:15.342 }, 00:08:15.342 { 00:08:15.342 "name": null, 00:08:15.342 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:15.342 "is_configured": false, 00:08:15.342 "data_offset": 0, 00:08:15.342 "data_size": 65536 00:08:15.342 }, 00:08:15.342 { 
00:08:15.342 "name": "BaseBdev3", 00:08:15.342 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:15.342 "is_configured": true, 00:08:15.342 "data_offset": 0, 00:08:15.342 "data_size": 65536 00:08:15.342 } 00:08:15.342 ] 00:08:15.342 }' 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.342 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.910 [2024-12-12 19:36:58.687410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.910 BaseBdev1 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:15.910 19:36:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.910 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.910 [ 00:08:15.910 { 00:08:15.910 "name": "BaseBdev1", 00:08:15.910 "aliases": [ 00:08:15.910 "23bd7f3f-1b4f-4f93-8656-dbd861a6c466" 00:08:15.910 ], 00:08:15.910 "product_name": "Malloc disk", 00:08:15.910 "block_size": 512, 00:08:15.910 "num_blocks": 65536, 00:08:15.910 "uuid": "23bd7f3f-1b4f-4f93-8656-dbd861a6c466", 00:08:15.910 "assigned_rate_limits": { 00:08:15.910 "rw_ios_per_sec": 0, 00:08:15.911 "rw_mbytes_per_sec": 0, 00:08:15.911 "r_mbytes_per_sec": 0, 00:08:15.911 "w_mbytes_per_sec": 0 00:08:15.911 }, 00:08:15.911 "claimed": true, 00:08:15.911 "claim_type": "exclusive_write", 00:08:15.911 "zoned": false, 00:08:15.911 "supported_io_types": { 00:08:15.911 "read": true, 00:08:15.911 "write": true, 00:08:15.911 "unmap": true, 00:08:15.911 "flush": true, 
00:08:15.911 "reset": true, 00:08:15.911 "nvme_admin": false, 00:08:15.911 "nvme_io": false, 00:08:15.911 "nvme_io_md": false, 00:08:15.911 "write_zeroes": true, 00:08:15.911 "zcopy": true, 00:08:15.911 "get_zone_info": false, 00:08:15.911 "zone_management": false, 00:08:15.911 "zone_append": false, 00:08:15.911 "compare": false, 00:08:15.911 "compare_and_write": false, 00:08:15.911 "abort": true, 00:08:15.911 "seek_hole": false, 00:08:15.911 "seek_data": false, 00:08:15.911 "copy": true, 00:08:15.911 "nvme_iov_md": false 00:08:15.911 }, 00:08:15.911 "memory_domains": [ 00:08:15.911 { 00:08:15.911 "dma_device_id": "system", 00:08:15.911 "dma_device_type": 1 00:08:15.911 }, 00:08:15.911 { 00:08:15.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.911 "dma_device_type": 2 00:08:15.911 } 00:08:15.911 ], 00:08:15.911 "driver_specific": {} 00:08:15.911 } 00:08:15.911 ] 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.911 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.170 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.170 "name": "Existed_Raid", 00:08:16.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.170 "strip_size_kb": 64, 00:08:16.170 "state": "configuring", 00:08:16.170 "raid_level": "raid0", 00:08:16.170 "superblock": false, 00:08:16.170 "num_base_bdevs": 3, 00:08:16.170 "num_base_bdevs_discovered": 2, 00:08:16.170 "num_base_bdevs_operational": 3, 00:08:16.170 "base_bdevs_list": [ 00:08:16.170 { 00:08:16.170 "name": "BaseBdev1", 00:08:16.170 "uuid": "23bd7f3f-1b4f-4f93-8656-dbd861a6c466", 00:08:16.170 "is_configured": true, 00:08:16.170 "data_offset": 0, 00:08:16.170 "data_size": 65536 00:08:16.170 }, 00:08:16.170 { 00:08:16.170 "name": null, 00:08:16.170 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:16.170 "is_configured": false, 00:08:16.170 "data_offset": 0, 00:08:16.170 "data_size": 65536 00:08:16.170 }, 00:08:16.170 { 00:08:16.170 "name": "BaseBdev3", 00:08:16.170 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:16.170 "is_configured": true, 00:08:16.170 "data_offset": 0, 00:08:16.170 "data_size": 65536 
00:08:16.170 } 00:08:16.170 ] 00:08:16.170 }' 00:08:16.170 19:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.170 19:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.429 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.429 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.429 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.429 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:16.429 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.429 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:16.429 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:16.429 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.429 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.689 [2024-12-12 19:36:59.274531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.689 
19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.689 "name": "Existed_Raid", 00:08:16.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.689 "strip_size_kb": 64, 00:08:16.689 "state": "configuring", 00:08:16.689 "raid_level": "raid0", 00:08:16.689 "superblock": false, 00:08:16.689 "num_base_bdevs": 3, 00:08:16.689 "num_base_bdevs_discovered": 1, 00:08:16.689 "num_base_bdevs_operational": 3, 00:08:16.689 "base_bdevs_list": [ 00:08:16.689 { 00:08:16.689 "name": "BaseBdev1", 00:08:16.689 "uuid": "23bd7f3f-1b4f-4f93-8656-dbd861a6c466", 00:08:16.689 "is_configured": true, 00:08:16.689 "data_offset": 0, 00:08:16.689 "data_size": 65536 00:08:16.689 }, 00:08:16.689 { 00:08:16.689 "name": null, 
00:08:16.689 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:16.689 "is_configured": false, 00:08:16.689 "data_offset": 0, 00:08:16.689 "data_size": 65536 00:08:16.689 }, 00:08:16.689 { 00:08:16.689 "name": null, 00:08:16.689 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:16.689 "is_configured": false, 00:08:16.689 "data_offset": 0, 00:08:16.689 "data_size": 65536 00:08:16.689 } 00:08:16.689 ] 00:08:16.689 }' 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.689 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.948 [2024-12-12 19:36:59.705780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.948 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.949 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.949 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.949 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.949 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.949 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.949 "name": "Existed_Raid", 00:08:16.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.949 "strip_size_kb": 64, 00:08:16.949 "state": "configuring", 00:08:16.949 "raid_level": "raid0", 00:08:16.949 "superblock": false, 00:08:16.949 
"num_base_bdevs": 3, 00:08:16.949 "num_base_bdevs_discovered": 2, 00:08:16.949 "num_base_bdevs_operational": 3, 00:08:16.949 "base_bdevs_list": [ 00:08:16.949 { 00:08:16.949 "name": "BaseBdev1", 00:08:16.949 "uuid": "23bd7f3f-1b4f-4f93-8656-dbd861a6c466", 00:08:16.949 "is_configured": true, 00:08:16.949 "data_offset": 0, 00:08:16.949 "data_size": 65536 00:08:16.949 }, 00:08:16.949 { 00:08:16.949 "name": null, 00:08:16.949 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:16.949 "is_configured": false, 00:08:16.949 "data_offset": 0, 00:08:16.949 "data_size": 65536 00:08:16.949 }, 00:08:16.949 { 00:08:16.949 "name": "BaseBdev3", 00:08:16.949 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:16.949 "is_configured": true, 00:08:16.949 "data_offset": 0, 00:08:16.949 "data_size": 65536 00:08:16.949 } 00:08:16.949 ] 00:08:16.949 }' 00:08:16.949 19:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.949 19:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.516 19:37:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.516 [2024-12-12 19:37:00.141538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.516 "name": "Existed_Raid", 00:08:17.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.516 "strip_size_kb": 64, 00:08:17.516 "state": "configuring", 00:08:17.516 "raid_level": "raid0", 00:08:17.516 "superblock": false, 00:08:17.516 "num_base_bdevs": 3, 00:08:17.516 "num_base_bdevs_discovered": 1, 00:08:17.516 "num_base_bdevs_operational": 3, 00:08:17.516 "base_bdevs_list": [ 00:08:17.516 { 00:08:17.516 "name": null, 00:08:17.516 "uuid": "23bd7f3f-1b4f-4f93-8656-dbd861a6c466", 00:08:17.516 "is_configured": false, 00:08:17.516 "data_offset": 0, 00:08:17.516 "data_size": 65536 00:08:17.516 }, 00:08:17.516 { 00:08:17.516 "name": null, 00:08:17.516 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:17.516 "is_configured": false, 00:08:17.516 "data_offset": 0, 00:08:17.516 "data_size": 65536 00:08:17.516 }, 00:08:17.516 { 00:08:17.516 "name": "BaseBdev3", 00:08:17.516 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:17.516 "is_configured": true, 00:08:17.516 "data_offset": 0, 00:08:17.516 "data_size": 65536 00:08:17.516 } 00:08:17.516 ] 00:08:17.516 }' 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.516 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.143 [2024-12-12 19:37:00.681732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.143 "name": "Existed_Raid", 00:08:18.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.143 "strip_size_kb": 64, 00:08:18.143 "state": "configuring", 00:08:18.143 "raid_level": "raid0", 00:08:18.143 "superblock": false, 00:08:18.143 "num_base_bdevs": 3, 00:08:18.143 "num_base_bdevs_discovered": 2, 00:08:18.143 "num_base_bdevs_operational": 3, 00:08:18.143 "base_bdevs_list": [ 00:08:18.143 { 00:08:18.143 "name": null, 00:08:18.143 "uuid": "23bd7f3f-1b4f-4f93-8656-dbd861a6c466", 00:08:18.143 "is_configured": false, 00:08:18.143 "data_offset": 0, 00:08:18.143 "data_size": 65536 00:08:18.143 }, 00:08:18.143 { 00:08:18.143 "name": "BaseBdev2", 00:08:18.143 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:18.143 "is_configured": true, 00:08:18.143 "data_offset": 0, 00:08:18.143 "data_size": 65536 00:08:18.143 }, 00:08:18.143 { 00:08:18.143 "name": "BaseBdev3", 00:08:18.143 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:18.143 "is_configured": true, 00:08:18.143 "data_offset": 0, 00:08:18.143 "data_size": 65536 00:08:18.143 } 00:08:18.143 ] 00:08:18.143 }' 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.143 19:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.401 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.401 19:37:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.401 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.401 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 23bd7f3f-1b4f-4f93-8656-dbd861a6c466 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.402 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.402 [2024-12-12 19:37:01.245330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:18.402 [2024-12-12 19:37:01.245483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:18.402 [2024-12-12 19:37:01.245513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:18.402 [2024-12-12 19:37:01.245874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:18.661 [2024-12-12 19:37:01.246096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:18.661 [2024-12-12 19:37:01.246142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:18.661 [2024-12-12 19:37:01.246482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.661 NewBaseBdev 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:18.661 [ 00:08:18.661 { 00:08:18.661 "name": "NewBaseBdev", 00:08:18.661 "aliases": [ 00:08:18.661 "23bd7f3f-1b4f-4f93-8656-dbd861a6c466" 00:08:18.661 ], 00:08:18.661 "product_name": "Malloc disk", 00:08:18.661 "block_size": 512, 00:08:18.661 "num_blocks": 65536, 00:08:18.661 "uuid": "23bd7f3f-1b4f-4f93-8656-dbd861a6c466", 00:08:18.661 "assigned_rate_limits": { 00:08:18.661 "rw_ios_per_sec": 0, 00:08:18.661 "rw_mbytes_per_sec": 0, 00:08:18.661 "r_mbytes_per_sec": 0, 00:08:18.661 "w_mbytes_per_sec": 0 00:08:18.661 }, 00:08:18.661 "claimed": true, 00:08:18.661 "claim_type": "exclusive_write", 00:08:18.661 "zoned": false, 00:08:18.661 "supported_io_types": { 00:08:18.661 "read": true, 00:08:18.661 "write": true, 00:08:18.661 "unmap": true, 00:08:18.661 "flush": true, 00:08:18.661 "reset": true, 00:08:18.661 "nvme_admin": false, 00:08:18.661 "nvme_io": false, 00:08:18.661 "nvme_io_md": false, 00:08:18.661 "write_zeroes": true, 00:08:18.661 "zcopy": true, 00:08:18.661 "get_zone_info": false, 00:08:18.661 "zone_management": false, 00:08:18.661 "zone_append": false, 00:08:18.661 "compare": false, 00:08:18.661 "compare_and_write": false, 00:08:18.661 "abort": true, 00:08:18.661 "seek_hole": false, 00:08:18.661 "seek_data": false, 00:08:18.661 "copy": true, 00:08:18.661 "nvme_iov_md": false 00:08:18.661 }, 00:08:18.661 "memory_domains": [ 00:08:18.661 { 00:08:18.661 "dma_device_id": "system", 00:08:18.661 "dma_device_type": 1 00:08:18.661 }, 00:08:18.661 { 00:08:18.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.661 "dma_device_type": 2 00:08:18.661 } 00:08:18.661 ], 00:08:18.661 "driver_specific": {} 00:08:18.661 } 00:08:18.661 ] 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.661 "name": "Existed_Raid", 00:08:18.661 "uuid": "86204f3f-eeb4-4c25-942a-7f6642d98482", 00:08:18.661 "strip_size_kb": 64, 00:08:18.661 "state": "online", 00:08:18.661 "raid_level": "raid0", 00:08:18.661 "superblock": false, 00:08:18.661 "num_base_bdevs": 3, 00:08:18.661 
"num_base_bdevs_discovered": 3, 00:08:18.661 "num_base_bdevs_operational": 3, 00:08:18.661 "base_bdevs_list": [ 00:08:18.661 { 00:08:18.661 "name": "NewBaseBdev", 00:08:18.661 "uuid": "23bd7f3f-1b4f-4f93-8656-dbd861a6c466", 00:08:18.661 "is_configured": true, 00:08:18.661 "data_offset": 0, 00:08:18.661 "data_size": 65536 00:08:18.661 }, 00:08:18.661 { 00:08:18.661 "name": "BaseBdev2", 00:08:18.661 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:18.661 "is_configured": true, 00:08:18.661 "data_offset": 0, 00:08:18.661 "data_size": 65536 00:08:18.661 }, 00:08:18.661 { 00:08:18.661 "name": "BaseBdev3", 00:08:18.661 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:18.661 "is_configured": true, 00:08:18.661 "data_offset": 0, 00:08:18.661 "data_size": 65536 00:08:18.661 } 00:08:18.661 ] 00:08:18.661 }' 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.661 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.920 [2024-12-12 19:37:01.736939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.920 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.181 "name": "Existed_Raid", 00:08:19.181 "aliases": [ 00:08:19.181 "86204f3f-eeb4-4c25-942a-7f6642d98482" 00:08:19.181 ], 00:08:19.181 "product_name": "Raid Volume", 00:08:19.181 "block_size": 512, 00:08:19.181 "num_blocks": 196608, 00:08:19.181 "uuid": "86204f3f-eeb4-4c25-942a-7f6642d98482", 00:08:19.181 "assigned_rate_limits": { 00:08:19.181 "rw_ios_per_sec": 0, 00:08:19.181 "rw_mbytes_per_sec": 0, 00:08:19.181 "r_mbytes_per_sec": 0, 00:08:19.181 "w_mbytes_per_sec": 0 00:08:19.181 }, 00:08:19.181 "claimed": false, 00:08:19.181 "zoned": false, 00:08:19.181 "supported_io_types": { 00:08:19.181 "read": true, 00:08:19.181 "write": true, 00:08:19.181 "unmap": true, 00:08:19.181 "flush": true, 00:08:19.181 "reset": true, 00:08:19.181 "nvme_admin": false, 00:08:19.181 "nvme_io": false, 00:08:19.181 "nvme_io_md": false, 00:08:19.181 "write_zeroes": true, 00:08:19.181 "zcopy": false, 00:08:19.181 "get_zone_info": false, 00:08:19.181 "zone_management": false, 00:08:19.181 "zone_append": false, 00:08:19.181 "compare": false, 00:08:19.181 "compare_and_write": false, 00:08:19.181 "abort": false, 00:08:19.181 "seek_hole": false, 00:08:19.181 "seek_data": false, 00:08:19.181 "copy": false, 00:08:19.181 "nvme_iov_md": false 00:08:19.181 }, 00:08:19.181 "memory_domains": [ 00:08:19.181 { 00:08:19.181 "dma_device_id": "system", 00:08:19.181 "dma_device_type": 1 00:08:19.181 }, 00:08:19.181 { 00:08:19.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.181 "dma_device_type": 2 00:08:19.181 }, 
00:08:19.181 { 00:08:19.181 "dma_device_id": "system", 00:08:19.181 "dma_device_type": 1 00:08:19.181 }, 00:08:19.181 { 00:08:19.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.181 "dma_device_type": 2 00:08:19.181 }, 00:08:19.181 { 00:08:19.181 "dma_device_id": "system", 00:08:19.181 "dma_device_type": 1 00:08:19.181 }, 00:08:19.181 { 00:08:19.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.181 "dma_device_type": 2 00:08:19.181 } 00:08:19.181 ], 00:08:19.181 "driver_specific": { 00:08:19.181 "raid": { 00:08:19.181 "uuid": "86204f3f-eeb4-4c25-942a-7f6642d98482", 00:08:19.181 "strip_size_kb": 64, 00:08:19.181 "state": "online", 00:08:19.181 "raid_level": "raid0", 00:08:19.181 "superblock": false, 00:08:19.181 "num_base_bdevs": 3, 00:08:19.181 "num_base_bdevs_discovered": 3, 00:08:19.181 "num_base_bdevs_operational": 3, 00:08:19.181 "base_bdevs_list": [ 00:08:19.181 { 00:08:19.181 "name": "NewBaseBdev", 00:08:19.181 "uuid": "23bd7f3f-1b4f-4f93-8656-dbd861a6c466", 00:08:19.181 "is_configured": true, 00:08:19.181 "data_offset": 0, 00:08:19.181 "data_size": 65536 00:08:19.181 }, 00:08:19.181 { 00:08:19.181 "name": "BaseBdev2", 00:08:19.181 "uuid": "86422579-ece9-471e-87f6-218d1b487a10", 00:08:19.181 "is_configured": true, 00:08:19.181 "data_offset": 0, 00:08:19.181 "data_size": 65536 00:08:19.181 }, 00:08:19.181 { 00:08:19.181 "name": "BaseBdev3", 00:08:19.181 "uuid": "f12241b7-a811-4d02-a25e-9752d3e23269", 00:08:19.181 "is_configured": true, 00:08:19.181 "data_offset": 0, 00:08:19.181 "data_size": 65536 00:08:19.181 } 00:08:19.181 ] 00:08:19.181 } 00:08:19.181 } 00:08:19.181 }' 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:19.181 BaseBdev2 00:08:19.181 BaseBdev3' 00:08:19.181 19:37:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.181 19:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.181 19:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.181 19:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.181 19:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.181 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.181 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.181 [2024-12-12 19:37:02.016142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.181 [2024-12-12 19:37:02.016170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.181 [2024-12-12 19:37:02.016254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.181 [2024-12-12 19:37:02.016311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.182 [2024-12-12 19:37:02.016322] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:19.182 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.182 19:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65511 00:08:19.182 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65511 ']' 00:08:19.182 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65511 00:08:19.442 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:19.442 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.442 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65511 00:08:19.442 killing process with pid 65511 00:08:19.442 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.442 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.442 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65511' 00:08:19.442 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65511 00:08:19.442 [2024-12-12 19:37:02.063411] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.442 19:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65511 00:08:19.701 [2024-12-12 19:37:02.354106] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:21.080 00:08:21.080 real 0m10.558s 00:08:21.080 user 0m16.806s 00:08:21.080 sys 0m1.709s 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:21.080 ************************************ 00:08:21.080 END TEST raid_state_function_test 00:08:21.080 ************************************ 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.080 19:37:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:21.080 19:37:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.080 19:37:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.080 19:37:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.080 ************************************ 00:08:21.080 START TEST raid_state_function_test_sb 00:08:21.080 ************************************ 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66132 00:08:21.080 19:37:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66132' 00:08:21.080 Process raid pid: 66132 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66132 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66132 ']' 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.080 19:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.080 [2024-12-12 19:37:03.645327] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:21.080 [2024-12-12 19:37:03.645441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.080 [2024-12-12 19:37:03.823656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.339 [2024-12-12 19:37:03.947091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.339 [2024-12-12 19:37:04.157454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.339 [2024-12-12 19:37:04.157495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.907 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.907 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:21.907 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:21.907 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.907 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.907 [2024-12-12 19:37:04.473910] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.908 [2024-12-12 19:37:04.474034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.908 [2024-12-12 19:37:04.474051] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.908 [2024-12-12 19:37:04.474076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.908 [2024-12-12 19:37:04.474084] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:21.908 [2024-12-12 19:37:04.474094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.908 "name": "Existed_Raid", 00:08:21.908 "uuid": "25eb7811-62ba-472e-adce-9ad513850a8e", 00:08:21.908 "strip_size_kb": 64, 00:08:21.908 "state": "configuring", 00:08:21.908 "raid_level": "raid0", 00:08:21.908 "superblock": true, 00:08:21.908 "num_base_bdevs": 3, 00:08:21.908 "num_base_bdevs_discovered": 0, 00:08:21.908 "num_base_bdevs_operational": 3, 00:08:21.908 "base_bdevs_list": [ 00:08:21.908 { 00:08:21.908 "name": "BaseBdev1", 00:08:21.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.908 "is_configured": false, 00:08:21.908 "data_offset": 0, 00:08:21.908 "data_size": 0 00:08:21.908 }, 00:08:21.908 { 00:08:21.908 "name": "BaseBdev2", 00:08:21.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.908 "is_configured": false, 00:08:21.908 "data_offset": 0, 00:08:21.908 "data_size": 0 00:08:21.908 }, 00:08:21.908 { 00:08:21.908 "name": "BaseBdev3", 00:08:21.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.908 "is_configured": false, 00:08:21.908 "data_offset": 0, 00:08:21.908 "data_size": 0 00:08:21.908 } 00:08:21.908 ] 00:08:21.908 }' 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.908 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.166 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.166 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.166 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.166 [2024-12-12 19:37:04.929068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.166 [2024-12-12 19:37:04.929167] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:22.166 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.166 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.166 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.166 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.166 [2024-12-12 19:37:04.941042] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.166 [2024-12-12 19:37:04.941121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.166 [2024-12-12 19:37:04.941148] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.166 [2024-12-12 19:37:04.941169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.166 [2024-12-12 19:37:04.941187] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.166 [2024-12-12 19:37:04.941207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.166 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.166 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.167 [2024-12-12 19:37:04.989683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.167 BaseBdev1 
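The records above show the harness's core verification pattern: `verify_raid_bdev_state` calls `rpc_cmd bdev_raid_get_bdevs all`, filters the array down to the bdev under test with `jq -r '.[] | select(.name == "Existed_Raid")'`, and then checks fields such as `"state"` against the expected value (`configuring` while base bdevs are still missing). The sketch below reproduces that check standalone, without a running SPDK target; the embedded JSON is a hypothetical payload shaped like the `raid_bdev_info` dumps in this log, and `sed` stands in for the `jq` field extraction.

```shell
# Sketch of a verify_raid_bdev_state-style check (assumption: payload shape
# mirrors the raid_bdev_info dumps in this log; SPDK is NOT invoked here).
# In the real test the JSON comes from:
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 3
}'

# Extract the "state" field (jq would do this more robustly; sed keeps the
# sketch dependency-free).
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
echo "$state"
```

Because `bdev_raid_create` was issued with `-b 'BaseBdev1 BaseBdev2 BaseBdev3'` before any of those bdevs existed, the expected state here is `configuring` with `num_base_bdevs_discovered` at 0; the state only moves to `online` once all three bases are created and claimed.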
00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.167 19:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.167 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.167 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.167 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.167 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.425 [ 00:08:22.425 { 00:08:22.425 "name": "BaseBdev1", 00:08:22.425 "aliases": [ 00:08:22.425 "e06fb8ac-55c2-4233-8450-7f25e652b276" 00:08:22.425 ], 00:08:22.425 "product_name": "Malloc disk", 00:08:22.425 "block_size": 512, 00:08:22.425 "num_blocks": 65536, 00:08:22.425 "uuid": "e06fb8ac-55c2-4233-8450-7f25e652b276", 00:08:22.425 "assigned_rate_limits": { 00:08:22.425 
"rw_ios_per_sec": 0, 00:08:22.425 "rw_mbytes_per_sec": 0, 00:08:22.425 "r_mbytes_per_sec": 0, 00:08:22.426 "w_mbytes_per_sec": 0 00:08:22.426 }, 00:08:22.426 "claimed": true, 00:08:22.426 "claim_type": "exclusive_write", 00:08:22.426 "zoned": false, 00:08:22.426 "supported_io_types": { 00:08:22.426 "read": true, 00:08:22.426 "write": true, 00:08:22.426 "unmap": true, 00:08:22.426 "flush": true, 00:08:22.426 "reset": true, 00:08:22.426 "nvme_admin": false, 00:08:22.426 "nvme_io": false, 00:08:22.426 "nvme_io_md": false, 00:08:22.426 "write_zeroes": true, 00:08:22.426 "zcopy": true, 00:08:22.426 "get_zone_info": false, 00:08:22.426 "zone_management": false, 00:08:22.426 "zone_append": false, 00:08:22.426 "compare": false, 00:08:22.426 "compare_and_write": false, 00:08:22.426 "abort": true, 00:08:22.426 "seek_hole": false, 00:08:22.426 "seek_data": false, 00:08:22.426 "copy": true, 00:08:22.426 "nvme_iov_md": false 00:08:22.426 }, 00:08:22.426 "memory_domains": [ 00:08:22.426 { 00:08:22.426 "dma_device_id": "system", 00:08:22.426 "dma_device_type": 1 00:08:22.426 }, 00:08:22.426 { 00:08:22.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.426 "dma_device_type": 2 00:08:22.426 } 00:08:22.426 ], 00:08:22.426 "driver_specific": {} 00:08:22.426 } 00:08:22.426 ] 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.426 "name": "Existed_Raid", 00:08:22.426 "uuid": "26111501-923f-481a-8abe-34b70c3ed014", 00:08:22.426 "strip_size_kb": 64, 00:08:22.426 "state": "configuring", 00:08:22.426 "raid_level": "raid0", 00:08:22.426 "superblock": true, 00:08:22.426 "num_base_bdevs": 3, 00:08:22.426 "num_base_bdevs_discovered": 1, 00:08:22.426 "num_base_bdevs_operational": 3, 00:08:22.426 "base_bdevs_list": [ 00:08:22.426 { 00:08:22.426 "name": "BaseBdev1", 00:08:22.426 "uuid": "e06fb8ac-55c2-4233-8450-7f25e652b276", 00:08:22.426 "is_configured": true, 00:08:22.426 "data_offset": 2048, 00:08:22.426 "data_size": 63488 
00:08:22.426 }, 00:08:22.426 { 00:08:22.426 "name": "BaseBdev2", 00:08:22.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.426 "is_configured": false, 00:08:22.426 "data_offset": 0, 00:08:22.426 "data_size": 0 00:08:22.426 }, 00:08:22.426 { 00:08:22.426 "name": "BaseBdev3", 00:08:22.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.426 "is_configured": false, 00:08:22.426 "data_offset": 0, 00:08:22.426 "data_size": 0 00:08:22.426 } 00:08:22.426 ] 00:08:22.426 }' 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.426 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.686 [2024-12-12 19:37:05.481030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.686 [2024-12-12 19:37:05.481137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.686 [2024-12-12 19:37:05.493074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.686 [2024-12-12 
19:37:05.495075] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.686 [2024-12-12 19:37:05.495119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.686 [2024-12-12 19:37:05.495130] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.686 [2024-12-12 19:37:05.495139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.686 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.687 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.687 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.687 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.687 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:22.687 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.687 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.687 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.687 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.687 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.947 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.947 "name": "Existed_Raid", 00:08:22.947 "uuid": "db7fab0d-9eaf-4108-b0fb-b7e431a93949", 00:08:22.947 "strip_size_kb": 64, 00:08:22.947 "state": "configuring", 00:08:22.947 "raid_level": "raid0", 00:08:22.947 "superblock": true, 00:08:22.947 "num_base_bdevs": 3, 00:08:22.947 "num_base_bdevs_discovered": 1, 00:08:22.947 "num_base_bdevs_operational": 3, 00:08:22.947 "base_bdevs_list": [ 00:08:22.947 { 00:08:22.947 "name": "BaseBdev1", 00:08:22.947 "uuid": "e06fb8ac-55c2-4233-8450-7f25e652b276", 00:08:22.947 "is_configured": true, 00:08:22.947 "data_offset": 2048, 00:08:22.947 "data_size": 63488 00:08:22.947 }, 00:08:22.947 { 00:08:22.947 "name": "BaseBdev2", 00:08:22.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.947 "is_configured": false, 00:08:22.947 "data_offset": 0, 00:08:22.947 "data_size": 0 00:08:22.947 }, 00:08:22.947 { 00:08:22.947 "name": "BaseBdev3", 00:08:22.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.947 "is_configured": false, 00:08:22.947 "data_offset": 0, 00:08:22.947 "data_size": 0 00:08:22.947 } 00:08:22.947 ] 00:08:22.947 }' 00:08:22.947 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.947 19:37:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.205 19:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.205 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.205 19:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.205 [2024-12-12 19:37:06.004833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.205 BaseBdev2 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.205 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.205 [ 00:08:23.205 { 00:08:23.205 "name": "BaseBdev2", 00:08:23.205 "aliases": [ 00:08:23.205 "5c80f71f-2d00-4163-b7c0-f6d583ea9ce6" 00:08:23.205 ], 00:08:23.205 "product_name": "Malloc disk", 00:08:23.205 "block_size": 512, 00:08:23.205 "num_blocks": 65536, 00:08:23.205 "uuid": "5c80f71f-2d00-4163-b7c0-f6d583ea9ce6", 00:08:23.205 "assigned_rate_limits": { 00:08:23.205 "rw_ios_per_sec": 0, 00:08:23.205 "rw_mbytes_per_sec": 0, 00:08:23.205 "r_mbytes_per_sec": 0, 00:08:23.205 "w_mbytes_per_sec": 0 00:08:23.205 }, 00:08:23.205 "claimed": true, 00:08:23.205 "claim_type": "exclusive_write", 00:08:23.205 "zoned": false, 00:08:23.205 "supported_io_types": { 00:08:23.205 "read": true, 00:08:23.205 "write": true, 00:08:23.206 "unmap": true, 00:08:23.206 "flush": true, 00:08:23.206 "reset": true, 00:08:23.206 "nvme_admin": false, 00:08:23.206 "nvme_io": false, 00:08:23.206 "nvme_io_md": false, 00:08:23.206 "write_zeroes": true, 00:08:23.206 "zcopy": true, 00:08:23.206 "get_zone_info": false, 00:08:23.206 "zone_management": false, 00:08:23.206 "zone_append": false, 00:08:23.206 "compare": false, 00:08:23.206 "compare_and_write": false, 00:08:23.206 "abort": true, 00:08:23.206 "seek_hole": false, 00:08:23.206 "seek_data": false, 00:08:23.206 "copy": true, 00:08:23.206 "nvme_iov_md": false 00:08:23.206 }, 00:08:23.206 "memory_domains": [ 00:08:23.206 { 00:08:23.206 "dma_device_id": "system", 00:08:23.206 "dma_device_type": 1 00:08:23.206 }, 00:08:23.206 { 00:08:23.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.206 "dma_device_type": 2 00:08:23.206 } 00:08:23.206 ], 00:08:23.206 "driver_specific": {} 00:08:23.206 } 00:08:23.206 ] 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.206 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.465 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.465 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.465 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.465 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.465 19:37:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.465 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.465 "name": "Existed_Raid", 00:08:23.465 "uuid": "db7fab0d-9eaf-4108-b0fb-b7e431a93949", 00:08:23.465 "strip_size_kb": 64, 00:08:23.465 "state": "configuring", 00:08:23.465 "raid_level": "raid0", 00:08:23.465 "superblock": true, 00:08:23.465 "num_base_bdevs": 3, 00:08:23.465 "num_base_bdevs_discovered": 2, 00:08:23.465 "num_base_bdevs_operational": 3, 00:08:23.465 "base_bdevs_list": [ 00:08:23.465 { 00:08:23.465 "name": "BaseBdev1", 00:08:23.465 "uuid": "e06fb8ac-55c2-4233-8450-7f25e652b276", 00:08:23.465 "is_configured": true, 00:08:23.465 "data_offset": 2048, 00:08:23.465 "data_size": 63488 00:08:23.465 }, 00:08:23.465 { 00:08:23.465 "name": "BaseBdev2", 00:08:23.465 "uuid": "5c80f71f-2d00-4163-b7c0-f6d583ea9ce6", 00:08:23.465 "is_configured": true, 00:08:23.465 "data_offset": 2048, 00:08:23.465 "data_size": 63488 00:08:23.465 }, 00:08:23.465 { 00:08:23.465 "name": "BaseBdev3", 00:08:23.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.465 "is_configured": false, 00:08:23.465 "data_offset": 0, 00:08:23.465 "data_size": 0 00:08:23.465 } 00:08:23.465 ] 00:08:23.465 }' 00:08:23.465 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.465 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.724 [2024-12-12 19:37:06.538501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.724 [2024-12-12 19:37:06.538936] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.724 [2024-12-12 19:37:06.538967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:23.724 [2024-12-12 19:37:06.539279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:23.724 [2024-12-12 19:37:06.539462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.724 [2024-12-12 19:37:06.539481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:23.724 BaseBdev3 00:08:23.724 [2024-12-12 19:37:06.539679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.724 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.724 [ 00:08:23.724 { 00:08:23.724 "name": "BaseBdev3", 00:08:23.724 "aliases": [ 00:08:23.724 "6ba46ef3-8d66-42b0-9747-6df03f69bd4a" 00:08:23.724 ], 00:08:23.724 "product_name": "Malloc disk", 00:08:23.724 "block_size": 512, 00:08:23.724 "num_blocks": 65536, 00:08:23.724 "uuid": "6ba46ef3-8d66-42b0-9747-6df03f69bd4a", 00:08:23.724 "assigned_rate_limits": { 00:08:23.724 "rw_ios_per_sec": 0, 00:08:23.724 "rw_mbytes_per_sec": 0, 00:08:23.724 "r_mbytes_per_sec": 0, 00:08:23.724 "w_mbytes_per_sec": 0 00:08:23.724 }, 00:08:23.724 "claimed": true, 00:08:23.724 "claim_type": "exclusive_write", 00:08:23.724 "zoned": false, 00:08:23.724 "supported_io_types": { 00:08:23.724 "read": true, 00:08:23.724 "write": true, 00:08:23.724 "unmap": true, 00:08:23.724 "flush": true, 00:08:23.724 "reset": true, 00:08:23.724 "nvme_admin": false, 00:08:23.724 "nvme_io": false, 00:08:23.724 "nvme_io_md": false, 00:08:23.724 "write_zeroes": true, 00:08:23.724 "zcopy": true, 00:08:23.724 "get_zone_info": false, 00:08:23.724 "zone_management": false, 00:08:23.724 "zone_append": false, 00:08:23.724 "compare": false, 00:08:23.724 "compare_and_write": false, 00:08:23.724 "abort": true, 00:08:23.724 "seek_hole": false, 00:08:23.724 "seek_data": false, 00:08:23.724 "copy": true, 00:08:23.724 "nvme_iov_md": false 00:08:23.724 }, 00:08:23.724 "memory_domains": [ 00:08:23.724 { 00:08:23.724 "dma_device_id": "system", 00:08:23.724 "dma_device_type": 1 00:08:23.724 }, 00:08:23.724 { 00:08:23.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.724 "dma_device_type": 2 00:08:23.724 } 00:08:23.724 ], 00:08:23.724 "driver_specific": 
{} 00:08:23.982 } 00:08:23.982 ] 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.982 
19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.982 "name": "Existed_Raid", 00:08:23.982 "uuid": "db7fab0d-9eaf-4108-b0fb-b7e431a93949", 00:08:23.982 "strip_size_kb": 64, 00:08:23.982 "state": "online", 00:08:23.982 "raid_level": "raid0", 00:08:23.982 "superblock": true, 00:08:23.982 "num_base_bdevs": 3, 00:08:23.982 "num_base_bdevs_discovered": 3, 00:08:23.982 "num_base_bdevs_operational": 3, 00:08:23.982 "base_bdevs_list": [ 00:08:23.982 { 00:08:23.982 "name": "BaseBdev1", 00:08:23.982 "uuid": "e06fb8ac-55c2-4233-8450-7f25e652b276", 00:08:23.982 "is_configured": true, 00:08:23.982 "data_offset": 2048, 00:08:23.982 "data_size": 63488 00:08:23.982 }, 00:08:23.982 { 00:08:23.982 "name": "BaseBdev2", 00:08:23.982 "uuid": "5c80f71f-2d00-4163-b7c0-f6d583ea9ce6", 00:08:23.982 "is_configured": true, 00:08:23.982 "data_offset": 2048, 00:08:23.982 "data_size": 63488 00:08:23.982 }, 00:08:23.982 { 00:08:23.982 "name": "BaseBdev3", 00:08:23.982 "uuid": "6ba46ef3-8d66-42b0-9747-6df03f69bd4a", 00:08:23.982 "is_configured": true, 00:08:23.982 "data_offset": 2048, 00:08:23.982 "data_size": 63488 00:08:23.982 } 00:08:23.982 ] 00:08:23.982 }' 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.982 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.239 19:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.239 [2024-12-12 19:37:06.990245] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.239 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.239 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.239 "name": "Existed_Raid", 00:08:24.239 "aliases": [ 00:08:24.239 "db7fab0d-9eaf-4108-b0fb-b7e431a93949" 00:08:24.239 ], 00:08:24.239 "product_name": "Raid Volume", 00:08:24.239 "block_size": 512, 00:08:24.239 "num_blocks": 190464, 00:08:24.239 "uuid": "db7fab0d-9eaf-4108-b0fb-b7e431a93949", 00:08:24.239 "assigned_rate_limits": { 00:08:24.239 "rw_ios_per_sec": 0, 00:08:24.239 "rw_mbytes_per_sec": 0, 00:08:24.239 "r_mbytes_per_sec": 0, 00:08:24.239 "w_mbytes_per_sec": 0 00:08:24.239 }, 00:08:24.239 "claimed": false, 00:08:24.239 "zoned": false, 00:08:24.239 "supported_io_types": { 00:08:24.239 "read": true, 00:08:24.239 "write": true, 00:08:24.239 "unmap": true, 00:08:24.239 "flush": true, 00:08:24.239 "reset": true, 00:08:24.239 "nvme_admin": false, 00:08:24.239 "nvme_io": false, 00:08:24.239 "nvme_io_md": false, 00:08:24.239 
"write_zeroes": true, 00:08:24.239 "zcopy": false, 00:08:24.239 "get_zone_info": false, 00:08:24.239 "zone_management": false, 00:08:24.239 "zone_append": false, 00:08:24.239 "compare": false, 00:08:24.239 "compare_and_write": false, 00:08:24.239 "abort": false, 00:08:24.239 "seek_hole": false, 00:08:24.239 "seek_data": false, 00:08:24.239 "copy": false, 00:08:24.239 "nvme_iov_md": false 00:08:24.239 }, 00:08:24.239 "memory_domains": [ 00:08:24.239 { 00:08:24.239 "dma_device_id": "system", 00:08:24.239 "dma_device_type": 1 00:08:24.239 }, 00:08:24.239 { 00:08:24.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.239 "dma_device_type": 2 00:08:24.239 }, 00:08:24.239 { 00:08:24.239 "dma_device_id": "system", 00:08:24.239 "dma_device_type": 1 00:08:24.239 }, 00:08:24.239 { 00:08:24.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.239 "dma_device_type": 2 00:08:24.239 }, 00:08:24.239 { 00:08:24.239 "dma_device_id": "system", 00:08:24.239 "dma_device_type": 1 00:08:24.239 }, 00:08:24.239 { 00:08:24.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.239 "dma_device_type": 2 00:08:24.239 } 00:08:24.239 ], 00:08:24.239 "driver_specific": { 00:08:24.239 "raid": { 00:08:24.239 "uuid": "db7fab0d-9eaf-4108-b0fb-b7e431a93949", 00:08:24.239 "strip_size_kb": 64, 00:08:24.239 "state": "online", 00:08:24.239 "raid_level": "raid0", 00:08:24.239 "superblock": true, 00:08:24.239 "num_base_bdevs": 3, 00:08:24.239 "num_base_bdevs_discovered": 3, 00:08:24.239 "num_base_bdevs_operational": 3, 00:08:24.239 "base_bdevs_list": [ 00:08:24.239 { 00:08:24.239 "name": "BaseBdev1", 00:08:24.239 "uuid": "e06fb8ac-55c2-4233-8450-7f25e652b276", 00:08:24.239 "is_configured": true, 00:08:24.239 "data_offset": 2048, 00:08:24.239 "data_size": 63488 00:08:24.239 }, 00:08:24.239 { 00:08:24.239 "name": "BaseBdev2", 00:08:24.239 "uuid": "5c80f71f-2d00-4163-b7c0-f6d583ea9ce6", 00:08:24.239 "is_configured": true, 00:08:24.239 "data_offset": 2048, 00:08:24.239 "data_size": 63488 00:08:24.239 }, 
00:08:24.239 { 00:08:24.239 "name": "BaseBdev3", 00:08:24.239 "uuid": "6ba46ef3-8d66-42b0-9747-6df03f69bd4a", 00:08:24.239 "is_configured": true, 00:08:24.239 "data_offset": 2048, 00:08:24.239 "data_size": 63488 00:08:24.239 } 00:08:24.239 ] 00:08:24.239 } 00:08:24.239 } 00:08:24.239 }' 00:08:24.239 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.239 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:24.239 BaseBdev2 00:08:24.239 BaseBdev3' 00:08:24.239 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.496 
19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 [2024-12-12 19:37:07.217759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.496 [2024-12-12 19:37:07.217793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.496 [2024-12-12 19:37:07.217854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.496 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.753 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.753 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.753 "name": "Existed_Raid", 00:08:24.753 "uuid": "db7fab0d-9eaf-4108-b0fb-b7e431a93949", 00:08:24.753 "strip_size_kb": 64, 00:08:24.753 "state": "offline", 00:08:24.753 "raid_level": "raid0", 00:08:24.753 "superblock": true, 00:08:24.753 "num_base_bdevs": 3, 00:08:24.753 "num_base_bdevs_discovered": 2, 00:08:24.753 "num_base_bdevs_operational": 2, 00:08:24.753 "base_bdevs_list": [ 00:08:24.753 { 00:08:24.753 "name": null, 00:08:24.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.754 "is_configured": false, 00:08:24.754 "data_offset": 0, 00:08:24.754 "data_size": 63488 00:08:24.754 }, 00:08:24.754 { 00:08:24.754 "name": "BaseBdev2", 00:08:24.754 "uuid": "5c80f71f-2d00-4163-b7c0-f6d583ea9ce6", 00:08:24.754 "is_configured": true, 00:08:24.754 "data_offset": 2048, 00:08:24.754 "data_size": 63488 00:08:24.754 }, 00:08:24.754 { 00:08:24.754 "name": "BaseBdev3", 00:08:24.754 "uuid": "6ba46ef3-8d66-42b0-9747-6df03f69bd4a", 
00:08:24.754 "is_configured": true, 00:08:24.754 "data_offset": 2048, 00:08:24.754 "data_size": 63488 00:08:24.754 } 00:08:24.754 ] 00:08:24.754 }' 00:08:24.754 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.754 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.012 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.012 [2024-12-12 19:37:07.803311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.269 19:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.269 [2024-12-12 19:37:07.953794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:25.269 [2024-12-12 19:37:07.953852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.269 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.526 BaseBdev2 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.526 19:37:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.526 [ 00:08:25.526 { 00:08:25.526 "name": "BaseBdev2", 00:08:25.526 "aliases": [ 00:08:25.526 "9fed886a-11b0-45f4-be51-7ee10bfd8de6" 00:08:25.526 ], 00:08:25.526 "product_name": "Malloc disk", 00:08:25.526 "block_size": 512, 00:08:25.526 "num_blocks": 65536, 00:08:25.526 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:25.526 "assigned_rate_limits": { 00:08:25.526 "rw_ios_per_sec": 0, 00:08:25.526 "rw_mbytes_per_sec": 0, 00:08:25.526 "r_mbytes_per_sec": 0, 00:08:25.526 "w_mbytes_per_sec": 0 00:08:25.526 }, 00:08:25.526 "claimed": false, 00:08:25.526 "zoned": false, 00:08:25.526 "supported_io_types": { 00:08:25.526 "read": true, 00:08:25.526 "write": true, 00:08:25.526 "unmap": true, 00:08:25.526 "flush": true, 00:08:25.526 "reset": true, 00:08:25.526 "nvme_admin": false, 00:08:25.526 "nvme_io": false, 00:08:25.526 "nvme_io_md": false, 00:08:25.526 "write_zeroes": true, 00:08:25.526 "zcopy": true, 00:08:25.526 "get_zone_info": false, 00:08:25.526 
"zone_management": false, 00:08:25.526 "zone_append": false, 00:08:25.526 "compare": false, 00:08:25.526 "compare_and_write": false, 00:08:25.526 "abort": true, 00:08:25.526 "seek_hole": false, 00:08:25.526 "seek_data": false, 00:08:25.526 "copy": true, 00:08:25.526 "nvme_iov_md": false 00:08:25.526 }, 00:08:25.526 "memory_domains": [ 00:08:25.526 { 00:08:25.526 "dma_device_id": "system", 00:08:25.526 "dma_device_type": 1 00:08:25.526 }, 00:08:25.526 { 00:08:25.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.526 "dma_device_type": 2 00:08:25.526 } 00:08:25.526 ], 00:08:25.526 "driver_specific": {} 00:08:25.526 } 00:08:25.526 ] 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.526 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.527 BaseBdev3 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.527 [ 00:08:25.527 { 00:08:25.527 "name": "BaseBdev3", 00:08:25.527 "aliases": [ 00:08:25.527 "00ece97f-b816-462b-88d2-f65c101b9ae2" 00:08:25.527 ], 00:08:25.527 "product_name": "Malloc disk", 00:08:25.527 "block_size": 512, 00:08:25.527 "num_blocks": 65536, 00:08:25.527 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:25.527 "assigned_rate_limits": { 00:08:25.527 "rw_ios_per_sec": 0, 00:08:25.527 "rw_mbytes_per_sec": 0, 00:08:25.527 "r_mbytes_per_sec": 0, 00:08:25.527 "w_mbytes_per_sec": 0 00:08:25.527 }, 00:08:25.527 "claimed": false, 00:08:25.527 "zoned": false, 00:08:25.527 "supported_io_types": { 00:08:25.527 "read": true, 00:08:25.527 "write": true, 00:08:25.527 "unmap": true, 00:08:25.527 "flush": true, 00:08:25.527 "reset": true, 00:08:25.527 "nvme_admin": false, 00:08:25.527 "nvme_io": false, 00:08:25.527 "nvme_io_md": false, 00:08:25.527 "write_zeroes": true, 00:08:25.527 
"zcopy": true, 00:08:25.527 "get_zone_info": false, 00:08:25.527 "zone_management": false, 00:08:25.527 "zone_append": false, 00:08:25.527 "compare": false, 00:08:25.527 "compare_and_write": false, 00:08:25.527 "abort": true, 00:08:25.527 "seek_hole": false, 00:08:25.527 "seek_data": false, 00:08:25.527 "copy": true, 00:08:25.527 "nvme_iov_md": false 00:08:25.527 }, 00:08:25.527 "memory_domains": [ 00:08:25.527 { 00:08:25.527 "dma_device_id": "system", 00:08:25.527 "dma_device_type": 1 00:08:25.527 }, 00:08:25.527 { 00:08:25.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.527 "dma_device_type": 2 00:08:25.527 } 00:08:25.527 ], 00:08:25.527 "driver_specific": {} 00:08:25.527 } 00:08:25.527 ] 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.527 [2024-12-12 19:37:08.243159] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.527 [2024-12-12 19:37:08.243289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.527 [2024-12-12 19:37:08.243360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.527 [2024-12-12 19:37:08.245577] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.527 19:37:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.527 "name": "Existed_Raid", 00:08:25.527 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:25.527 "strip_size_kb": 64, 00:08:25.527 "state": "configuring", 00:08:25.527 "raid_level": "raid0", 00:08:25.527 "superblock": true, 00:08:25.527 "num_base_bdevs": 3, 00:08:25.527 "num_base_bdevs_discovered": 2, 00:08:25.527 "num_base_bdevs_operational": 3, 00:08:25.527 "base_bdevs_list": [ 00:08:25.527 { 00:08:25.527 "name": "BaseBdev1", 00:08:25.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.527 "is_configured": false, 00:08:25.527 "data_offset": 0, 00:08:25.527 "data_size": 0 00:08:25.527 }, 00:08:25.527 { 00:08:25.527 "name": "BaseBdev2", 00:08:25.527 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:25.527 "is_configured": true, 00:08:25.527 "data_offset": 2048, 00:08:25.527 "data_size": 63488 00:08:25.527 }, 00:08:25.527 { 00:08:25.527 "name": "BaseBdev3", 00:08:25.527 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:25.527 "is_configured": true, 00:08:25.527 "data_offset": 2048, 00:08:25.527 "data_size": 63488 00:08:25.527 } 00:08:25.527 ] 00:08:25.527 }' 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.527 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.092 [2024-12-12 19:37:08.650534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.092 19:37:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.092 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.093 "name": "Existed_Raid", 00:08:26.093 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:26.093 "strip_size_kb": 64, 
00:08:26.093 "state": "configuring", 00:08:26.093 "raid_level": "raid0", 00:08:26.093 "superblock": true, 00:08:26.093 "num_base_bdevs": 3, 00:08:26.093 "num_base_bdevs_discovered": 1, 00:08:26.093 "num_base_bdevs_operational": 3, 00:08:26.093 "base_bdevs_list": [ 00:08:26.093 { 00:08:26.093 "name": "BaseBdev1", 00:08:26.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.093 "is_configured": false, 00:08:26.093 "data_offset": 0, 00:08:26.093 "data_size": 0 00:08:26.093 }, 00:08:26.093 { 00:08:26.093 "name": null, 00:08:26.093 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:26.093 "is_configured": false, 00:08:26.093 "data_offset": 0, 00:08:26.093 "data_size": 63488 00:08:26.093 }, 00:08:26.093 { 00:08:26.093 "name": "BaseBdev3", 00:08:26.093 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:26.093 "is_configured": true, 00:08:26.093 "data_offset": 2048, 00:08:26.093 "data_size": 63488 00:08:26.093 } 00:08:26.093 ] 00:08:26.093 }' 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.093 19:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.378 [2024-12-12 19:37:09.122520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.378 BaseBdev1 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.378 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.378 
[ 00:08:26.378 { 00:08:26.378 "name": "BaseBdev1", 00:08:26.378 "aliases": [ 00:08:26.378 "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f" 00:08:26.378 ], 00:08:26.378 "product_name": "Malloc disk", 00:08:26.378 "block_size": 512, 00:08:26.378 "num_blocks": 65536, 00:08:26.378 "uuid": "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f", 00:08:26.378 "assigned_rate_limits": { 00:08:26.378 "rw_ios_per_sec": 0, 00:08:26.378 "rw_mbytes_per_sec": 0, 00:08:26.378 "r_mbytes_per_sec": 0, 00:08:26.378 "w_mbytes_per_sec": 0 00:08:26.378 }, 00:08:26.378 "claimed": true, 00:08:26.378 "claim_type": "exclusive_write", 00:08:26.379 "zoned": false, 00:08:26.379 "supported_io_types": { 00:08:26.379 "read": true, 00:08:26.379 "write": true, 00:08:26.379 "unmap": true, 00:08:26.379 "flush": true, 00:08:26.379 "reset": true, 00:08:26.379 "nvme_admin": false, 00:08:26.379 "nvme_io": false, 00:08:26.379 "nvme_io_md": false, 00:08:26.379 "write_zeroes": true, 00:08:26.379 "zcopy": true, 00:08:26.379 "get_zone_info": false, 00:08:26.379 "zone_management": false, 00:08:26.379 "zone_append": false, 00:08:26.379 "compare": false, 00:08:26.379 "compare_and_write": false, 00:08:26.379 "abort": true, 00:08:26.379 "seek_hole": false, 00:08:26.379 "seek_data": false, 00:08:26.379 "copy": true, 00:08:26.379 "nvme_iov_md": false 00:08:26.379 }, 00:08:26.379 "memory_domains": [ 00:08:26.379 { 00:08:26.379 "dma_device_id": "system", 00:08:26.379 "dma_device_type": 1 00:08:26.379 }, 00:08:26.379 { 00:08:26.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.379 "dma_device_type": 2 00:08:26.379 } 00:08:26.379 ], 00:08:26.379 "driver_specific": {} 00:08:26.379 } 00:08:26.379 ] 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.379 "name": "Existed_Raid", 00:08:26.379 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:26.379 "strip_size_kb": 64, 00:08:26.379 "state": "configuring", 00:08:26.379 "raid_level": "raid0", 00:08:26.379 "superblock": true, 
00:08:26.379 "num_base_bdevs": 3, 00:08:26.379 "num_base_bdevs_discovered": 2, 00:08:26.379 "num_base_bdevs_operational": 3, 00:08:26.379 "base_bdevs_list": [ 00:08:26.379 { 00:08:26.379 "name": "BaseBdev1", 00:08:26.379 "uuid": "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f", 00:08:26.379 "is_configured": true, 00:08:26.379 "data_offset": 2048, 00:08:26.379 "data_size": 63488 00:08:26.379 }, 00:08:26.379 { 00:08:26.379 "name": null, 00:08:26.379 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:26.379 "is_configured": false, 00:08:26.379 "data_offset": 0, 00:08:26.379 "data_size": 63488 00:08:26.379 }, 00:08:26.379 { 00:08:26.379 "name": "BaseBdev3", 00:08:26.379 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:26.379 "is_configured": true, 00:08:26.379 "data_offset": 2048, 00:08:26.379 "data_size": 63488 00:08:26.379 } 00:08:26.379 ] 00:08:26.379 }' 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.379 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.956 [2024-12-12 19:37:09.565877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.956 "name": "Existed_Raid", 00:08:26.956 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:26.956 "strip_size_kb": 64, 00:08:26.956 "state": "configuring", 00:08:26.956 "raid_level": "raid0", 00:08:26.956 "superblock": true, 00:08:26.956 "num_base_bdevs": 3, 00:08:26.956 "num_base_bdevs_discovered": 1, 00:08:26.956 "num_base_bdevs_operational": 3, 00:08:26.956 "base_bdevs_list": [ 00:08:26.956 { 00:08:26.956 "name": "BaseBdev1", 00:08:26.956 "uuid": "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f", 00:08:26.956 "is_configured": true, 00:08:26.956 "data_offset": 2048, 00:08:26.956 "data_size": 63488 00:08:26.956 }, 00:08:26.956 { 00:08:26.956 "name": null, 00:08:26.956 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:26.956 "is_configured": false, 00:08:26.956 "data_offset": 0, 00:08:26.956 "data_size": 63488 00:08:26.956 }, 00:08:26.956 { 00:08:26.956 "name": null, 00:08:26.956 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:26.956 "is_configured": false, 00:08:26.956 "data_offset": 0, 00:08:26.956 "data_size": 63488 00:08:26.956 } 00:08:26.956 ] 00:08:26.956 }' 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.956 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.214 [2024-12-12 19:37:09.933721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.214 "name": "Existed_Raid", 00:08:27.214 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:27.214 "strip_size_kb": 64, 00:08:27.214 "state": "configuring", 00:08:27.214 "raid_level": "raid0", 00:08:27.214 "superblock": true, 00:08:27.214 "num_base_bdevs": 3, 00:08:27.214 "num_base_bdevs_discovered": 2, 00:08:27.214 "num_base_bdevs_operational": 3, 00:08:27.214 "base_bdevs_list": [ 00:08:27.214 { 00:08:27.214 "name": "BaseBdev1", 00:08:27.214 "uuid": "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f", 00:08:27.214 "is_configured": true, 00:08:27.214 "data_offset": 2048, 00:08:27.214 "data_size": 63488 00:08:27.214 }, 00:08:27.214 { 00:08:27.214 "name": null, 00:08:27.214 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:27.214 "is_configured": false, 00:08:27.214 "data_offset": 0, 00:08:27.214 "data_size": 63488 00:08:27.214 }, 00:08:27.214 { 00:08:27.214 "name": "BaseBdev3", 00:08:27.214 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:27.214 "is_configured": true, 00:08:27.214 "data_offset": 2048, 00:08:27.214 "data_size": 63488 00:08:27.214 } 00:08:27.214 ] 00:08:27.214 }' 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.214 19:37:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.473 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.473 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.473 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.473 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:27.473 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.473 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:27.473 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:27.473 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.473 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.473 [2024-12-12 19:37:10.289458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.732 "name": "Existed_Raid", 00:08:27.732 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:27.732 "strip_size_kb": 64, 00:08:27.732 "state": "configuring", 00:08:27.732 "raid_level": "raid0", 00:08:27.732 "superblock": true, 00:08:27.732 "num_base_bdevs": 3, 00:08:27.732 "num_base_bdevs_discovered": 1, 00:08:27.732 "num_base_bdevs_operational": 3, 00:08:27.732 "base_bdevs_list": [ 00:08:27.732 { 00:08:27.732 "name": null, 00:08:27.732 "uuid": "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f", 00:08:27.732 "is_configured": false, 00:08:27.732 "data_offset": 0, 00:08:27.732 "data_size": 63488 00:08:27.732 }, 00:08:27.732 { 00:08:27.732 "name": null, 00:08:27.732 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:27.732 "is_configured": false, 00:08:27.732 "data_offset": 0, 00:08:27.732 
"data_size": 63488 00:08:27.732 }, 00:08:27.732 { 00:08:27.732 "name": "BaseBdev3", 00:08:27.732 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:27.732 "is_configured": true, 00:08:27.732 "data_offset": 2048, 00:08:27.732 "data_size": 63488 00:08:27.732 } 00:08:27.732 ] 00:08:27.732 }' 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.732 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.991 [2024-12-12 19:37:10.816968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.991 19:37:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.991 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.250 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.250 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.250 "name": "Existed_Raid", 00:08:28.250 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:28.250 "strip_size_kb": 64, 00:08:28.250 "state": "configuring", 00:08:28.250 "raid_level": "raid0", 00:08:28.250 "superblock": true, 00:08:28.250 "num_base_bdevs": 3, 00:08:28.250 
"num_base_bdevs_discovered": 2, 00:08:28.250 "num_base_bdevs_operational": 3, 00:08:28.250 "base_bdevs_list": [ 00:08:28.250 { 00:08:28.250 "name": null, 00:08:28.250 "uuid": "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f", 00:08:28.250 "is_configured": false, 00:08:28.250 "data_offset": 0, 00:08:28.250 "data_size": 63488 00:08:28.250 }, 00:08:28.250 { 00:08:28.250 "name": "BaseBdev2", 00:08:28.250 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:28.250 "is_configured": true, 00:08:28.250 "data_offset": 2048, 00:08:28.250 "data_size": 63488 00:08:28.250 }, 00:08:28.250 { 00:08:28.250 "name": "BaseBdev3", 00:08:28.250 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:28.250 "is_configured": true, 00:08:28.250 "data_offset": 2048, 00:08:28.250 "data_size": 63488 00:08:28.250 } 00:08:28.250 ] 00:08:28.250 }' 00:08:28.250 19:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.250 19:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.509 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.509 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.509 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.509 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:28.509 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.509 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:28.509 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:28.509 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.509 19:37:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.509 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b2a6dd3a-5426-4be8-8186-b5c5b4972c1f 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.768 [2024-12-12 19:37:11.412306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:28.768 [2024-12-12 19:37:11.412579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:28.768 [2024-12-12 19:37:11.412598] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:28.768 [2024-12-12 19:37:11.412858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:28.768 [2024-12-12 19:37:11.412993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:28.768 [2024-12-12 19:37:11.413009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:28.768 [2024-12-12 19:37:11.413213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.768 NewBaseBdev 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:28.768 
19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.768 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.768 [ 00:08:28.768 { 00:08:28.768 "name": "NewBaseBdev", 00:08:28.768 "aliases": [ 00:08:28.768 "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f" 00:08:28.768 ], 00:08:28.768 "product_name": "Malloc disk", 00:08:28.768 "block_size": 512, 00:08:28.768 "num_blocks": 65536, 00:08:28.768 "uuid": "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f", 00:08:28.768 "assigned_rate_limits": { 00:08:28.768 "rw_ios_per_sec": 0, 00:08:28.768 "rw_mbytes_per_sec": 0, 00:08:28.768 "r_mbytes_per_sec": 0, 00:08:28.768 "w_mbytes_per_sec": 0 00:08:28.768 }, 00:08:28.768 "claimed": true, 00:08:28.768 "claim_type": "exclusive_write", 00:08:28.768 "zoned": false, 00:08:28.768 "supported_io_types": { 00:08:28.768 "read": true, 00:08:28.768 "write": true, 00:08:28.768 
"unmap": true, 00:08:28.768 "flush": true, 00:08:28.768 "reset": true, 00:08:28.768 "nvme_admin": false, 00:08:28.768 "nvme_io": false, 00:08:28.768 "nvme_io_md": false, 00:08:28.768 "write_zeroes": true, 00:08:28.768 "zcopy": true, 00:08:28.768 "get_zone_info": false, 00:08:28.768 "zone_management": false, 00:08:28.768 "zone_append": false, 00:08:28.769 "compare": false, 00:08:28.769 "compare_and_write": false, 00:08:28.769 "abort": true, 00:08:28.769 "seek_hole": false, 00:08:28.769 "seek_data": false, 00:08:28.769 "copy": true, 00:08:28.769 "nvme_iov_md": false 00:08:28.769 }, 00:08:28.769 "memory_domains": [ 00:08:28.769 { 00:08:28.769 "dma_device_id": "system", 00:08:28.769 "dma_device_type": 1 00:08:28.769 }, 00:08:28.769 { 00:08:28.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.769 "dma_device_type": 2 00:08:28.769 } 00:08:28.769 ], 00:08:28.769 "driver_specific": {} 00:08:28.769 } 00:08:28.769 ] 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.769 "name": "Existed_Raid", 00:08:28.769 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:28.769 "strip_size_kb": 64, 00:08:28.769 "state": "online", 00:08:28.769 "raid_level": "raid0", 00:08:28.769 "superblock": true, 00:08:28.769 "num_base_bdevs": 3, 00:08:28.769 "num_base_bdevs_discovered": 3, 00:08:28.769 "num_base_bdevs_operational": 3, 00:08:28.769 "base_bdevs_list": [ 00:08:28.769 { 00:08:28.769 "name": "NewBaseBdev", 00:08:28.769 "uuid": "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f", 00:08:28.769 "is_configured": true, 00:08:28.769 "data_offset": 2048, 00:08:28.769 "data_size": 63488 00:08:28.769 }, 00:08:28.769 { 00:08:28.769 "name": "BaseBdev2", 00:08:28.769 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:28.769 "is_configured": true, 00:08:28.769 "data_offset": 2048, 00:08:28.769 "data_size": 63488 00:08:28.769 }, 00:08:28.769 { 00:08:28.769 "name": "BaseBdev3", 00:08:28.769 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:28.769 
"is_configured": true, 00:08:28.769 "data_offset": 2048, 00:08:28.769 "data_size": 63488 00:08:28.769 } 00:08:28.769 ] 00:08:28.769 }' 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.769 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.028 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.028 [2024-12-12 19:37:11.851920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.287 "name": "Existed_Raid", 00:08:29.287 "aliases": [ 00:08:29.287 "c47fc1c4-52ad-4359-92f1-9d4b651d5387" 00:08:29.287 ], 00:08:29.287 "product_name": "Raid 
Volume", 00:08:29.287 "block_size": 512, 00:08:29.287 "num_blocks": 190464, 00:08:29.287 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:29.287 "assigned_rate_limits": { 00:08:29.287 "rw_ios_per_sec": 0, 00:08:29.287 "rw_mbytes_per_sec": 0, 00:08:29.287 "r_mbytes_per_sec": 0, 00:08:29.287 "w_mbytes_per_sec": 0 00:08:29.287 }, 00:08:29.287 "claimed": false, 00:08:29.287 "zoned": false, 00:08:29.287 "supported_io_types": { 00:08:29.287 "read": true, 00:08:29.287 "write": true, 00:08:29.287 "unmap": true, 00:08:29.287 "flush": true, 00:08:29.287 "reset": true, 00:08:29.287 "nvme_admin": false, 00:08:29.287 "nvme_io": false, 00:08:29.287 "nvme_io_md": false, 00:08:29.287 "write_zeroes": true, 00:08:29.287 "zcopy": false, 00:08:29.287 "get_zone_info": false, 00:08:29.287 "zone_management": false, 00:08:29.287 "zone_append": false, 00:08:29.287 "compare": false, 00:08:29.287 "compare_and_write": false, 00:08:29.287 "abort": false, 00:08:29.287 "seek_hole": false, 00:08:29.287 "seek_data": false, 00:08:29.287 "copy": false, 00:08:29.287 "nvme_iov_md": false 00:08:29.287 }, 00:08:29.287 "memory_domains": [ 00:08:29.287 { 00:08:29.287 "dma_device_id": "system", 00:08:29.287 "dma_device_type": 1 00:08:29.287 }, 00:08:29.287 { 00:08:29.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.287 "dma_device_type": 2 00:08:29.287 }, 00:08:29.287 { 00:08:29.287 "dma_device_id": "system", 00:08:29.287 "dma_device_type": 1 00:08:29.287 }, 00:08:29.287 { 00:08:29.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.287 "dma_device_type": 2 00:08:29.287 }, 00:08:29.287 { 00:08:29.287 "dma_device_id": "system", 00:08:29.287 "dma_device_type": 1 00:08:29.287 }, 00:08:29.287 { 00:08:29.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.287 "dma_device_type": 2 00:08:29.287 } 00:08:29.287 ], 00:08:29.287 "driver_specific": { 00:08:29.287 "raid": { 00:08:29.287 "uuid": "c47fc1c4-52ad-4359-92f1-9d4b651d5387", 00:08:29.287 "strip_size_kb": 64, 00:08:29.287 "state": "online", 
00:08:29.287 "raid_level": "raid0", 00:08:29.287 "superblock": true, 00:08:29.287 "num_base_bdevs": 3, 00:08:29.287 "num_base_bdevs_discovered": 3, 00:08:29.287 "num_base_bdevs_operational": 3, 00:08:29.287 "base_bdevs_list": [ 00:08:29.287 { 00:08:29.287 "name": "NewBaseBdev", 00:08:29.287 "uuid": "b2a6dd3a-5426-4be8-8186-b5c5b4972c1f", 00:08:29.287 "is_configured": true, 00:08:29.287 "data_offset": 2048, 00:08:29.287 "data_size": 63488 00:08:29.287 }, 00:08:29.287 { 00:08:29.287 "name": "BaseBdev2", 00:08:29.287 "uuid": "9fed886a-11b0-45f4-be51-7ee10bfd8de6", 00:08:29.287 "is_configured": true, 00:08:29.287 "data_offset": 2048, 00:08:29.287 "data_size": 63488 00:08:29.287 }, 00:08:29.287 { 00:08:29.287 "name": "BaseBdev3", 00:08:29.287 "uuid": "00ece97f-b816-462b-88d2-f65c101b9ae2", 00:08:29.287 "is_configured": true, 00:08:29.287 "data_offset": 2048, 00:08:29.287 "data_size": 63488 00:08:29.287 } 00:08:29.287 ] 00:08:29.287 } 00:08:29.287 } 00:08:29.287 }' 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:29.287 BaseBdev2 00:08:29.287 BaseBdev3' 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.287 19:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.287 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.547 [2024-12-12 19:37:12.139122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.547 [2024-12-12 19:37:12.139155] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.547 [2024-12-12 19:37:12.139244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.547 [2024-12-12 19:37:12.139302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.547 [2024-12-12 19:37:12.139314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66132 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66132 ']' 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66132 00:08:29.547 19:37:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66132 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66132' 00:08:29.547 killing process with pid 66132 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66132 00:08:29.547 [2024-12-12 19:37:12.187882] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.547 19:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66132 00:08:29.806 [2024-12-12 19:37:12.484336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.180 19:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.180 00:08:31.180 real 0m10.056s 00:08:31.180 user 0m15.923s 00:08:31.180 sys 0m1.490s 00:08:31.180 19:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.180 ************************************ 00:08:31.180 END TEST raid_state_function_test_sb 00:08:31.180 ************************************ 00:08:31.180 19:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.180 19:37:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:31.180 19:37:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:31.180 19:37:13 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.180 19:37:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.180 ************************************ 00:08:31.180 START TEST raid_superblock_test 00:08:31.180 ************************************ 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:31.180 19:37:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66747 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66747 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66747 ']' 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.180 19:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.180 [2024-12-12 19:37:13.767047] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:31.181 [2024-12-12 19:37:13.767182] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66747 ] 00:08:31.181 [2024-12-12 19:37:13.944991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.439 [2024-12-12 19:37:14.064266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.439 [2024-12-12 19:37:14.262672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.439 [2024-12-12 19:37:14.262729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:32.007 
19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 malloc1 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 [2024-12-12 19:37:14.648465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:32.007 [2024-12-12 19:37:14.648611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.007 [2024-12-12 19:37:14.648671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:32.007 [2024-12-12 19:37:14.648741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.007 [2024-12-12 19:37:14.651106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.007 [2024-12-12 19:37:14.651181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:32.007 pt1 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 malloc2 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 [2024-12-12 19:37:14.707104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.007 [2024-12-12 19:37:14.707160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.007 [2024-12-12 19:37:14.707189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:32.007 [2024-12-12 19:37:14.707201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.007 [2024-12-12 19:37:14.709296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.007 [2024-12-12 19:37:14.709368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.007 
pt2 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 malloc3 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.007 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 [2024-12-12 19:37:14.773249] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:32.007 [2024-12-12 19:37:14.773359] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.007 [2024-12-12 19:37:14.773413] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:32.007 [2024-12-12 19:37:14.773460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.007 [2024-12-12 19:37:14.775765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.007 [2024-12-12 19:37:14.775838] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:32.007 pt3 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 [2024-12-12 19:37:14.785300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:32.008 [2024-12-12 19:37:14.787286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.008 [2024-12-12 19:37:14.787400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:32.008 [2024-12-12 19:37:14.787631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:32.008 [2024-12-12 19:37:14.787683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:32.008 [2024-12-12 19:37:14.787962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:32.008 [2024-12-12 19:37:14.788174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:32.008 [2024-12-12 19:37:14.788215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:32.008 [2024-12-12 19:37:14.788445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.008 19:37:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.008 "name": "raid_bdev1", 00:08:32.008 "uuid": "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1", 00:08:32.008 "strip_size_kb": 64, 00:08:32.008 "state": "online", 00:08:32.008 "raid_level": "raid0", 00:08:32.008 "superblock": true, 00:08:32.008 "num_base_bdevs": 3, 00:08:32.008 "num_base_bdevs_discovered": 3, 00:08:32.008 "num_base_bdevs_operational": 3, 00:08:32.008 "base_bdevs_list": [ 00:08:32.008 { 00:08:32.008 "name": "pt1", 00:08:32.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.008 "is_configured": true, 00:08:32.008 "data_offset": 2048, 00:08:32.008 "data_size": 63488 00:08:32.008 }, 00:08:32.008 { 00:08:32.008 "name": "pt2", 00:08:32.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.008 "is_configured": true, 00:08:32.008 "data_offset": 2048, 00:08:32.008 "data_size": 63488 00:08:32.008 }, 00:08:32.008 { 00:08:32.008 "name": "pt3", 00:08:32.008 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.008 "is_configured": true, 00:08:32.008 "data_offset": 2048, 00:08:32.008 "data_size": 63488 00:08:32.008 } 00:08:32.008 ] 00:08:32.008 }' 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.008 19:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.573 [2024-12-12 19:37:15.248816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.573 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.573 "name": "raid_bdev1", 00:08:32.573 "aliases": [ 00:08:32.573 "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1" 00:08:32.573 ], 00:08:32.573 "product_name": "Raid Volume", 00:08:32.573 "block_size": 512, 00:08:32.573 "num_blocks": 190464, 00:08:32.573 "uuid": "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1", 00:08:32.573 "assigned_rate_limits": { 00:08:32.573 "rw_ios_per_sec": 0, 00:08:32.573 "rw_mbytes_per_sec": 0, 00:08:32.573 "r_mbytes_per_sec": 0, 00:08:32.573 "w_mbytes_per_sec": 0 00:08:32.574 }, 00:08:32.574 "claimed": false, 00:08:32.574 "zoned": false, 00:08:32.574 "supported_io_types": { 00:08:32.574 "read": true, 00:08:32.574 "write": true, 00:08:32.574 "unmap": true, 00:08:32.574 "flush": true, 00:08:32.574 "reset": true, 00:08:32.574 "nvme_admin": false, 00:08:32.574 "nvme_io": false, 00:08:32.574 "nvme_io_md": false, 00:08:32.574 "write_zeroes": true, 00:08:32.574 "zcopy": false, 00:08:32.574 "get_zone_info": false, 00:08:32.574 "zone_management": false, 00:08:32.574 "zone_append": false, 00:08:32.574 "compare": 
false, 00:08:32.574 "compare_and_write": false, 00:08:32.574 "abort": false, 00:08:32.574 "seek_hole": false, 00:08:32.574 "seek_data": false, 00:08:32.574 "copy": false, 00:08:32.574 "nvme_iov_md": false 00:08:32.574 }, 00:08:32.574 "memory_domains": [ 00:08:32.574 { 00:08:32.574 "dma_device_id": "system", 00:08:32.574 "dma_device_type": 1 00:08:32.574 }, 00:08:32.574 { 00:08:32.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.574 "dma_device_type": 2 00:08:32.574 }, 00:08:32.574 { 00:08:32.574 "dma_device_id": "system", 00:08:32.574 "dma_device_type": 1 00:08:32.574 }, 00:08:32.574 { 00:08:32.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.574 "dma_device_type": 2 00:08:32.574 }, 00:08:32.574 { 00:08:32.574 "dma_device_id": "system", 00:08:32.574 "dma_device_type": 1 00:08:32.574 }, 00:08:32.574 { 00:08:32.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.574 "dma_device_type": 2 00:08:32.574 } 00:08:32.574 ], 00:08:32.574 "driver_specific": { 00:08:32.574 "raid": { 00:08:32.574 "uuid": "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1", 00:08:32.574 "strip_size_kb": 64, 00:08:32.574 "state": "online", 00:08:32.574 "raid_level": "raid0", 00:08:32.574 "superblock": true, 00:08:32.574 "num_base_bdevs": 3, 00:08:32.574 "num_base_bdevs_discovered": 3, 00:08:32.574 "num_base_bdevs_operational": 3, 00:08:32.574 "base_bdevs_list": [ 00:08:32.574 { 00:08:32.574 "name": "pt1", 00:08:32.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.574 "is_configured": true, 00:08:32.574 "data_offset": 2048, 00:08:32.574 "data_size": 63488 00:08:32.574 }, 00:08:32.574 { 00:08:32.574 "name": "pt2", 00:08:32.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.574 "is_configured": true, 00:08:32.574 "data_offset": 2048, 00:08:32.574 "data_size": 63488 00:08:32.574 }, 00:08:32.574 { 00:08:32.574 "name": "pt3", 00:08:32.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.574 "is_configured": true, 00:08:32.574 "data_offset": 2048, 00:08:32.574 "data_size": 
63488 00:08:32.574 } 00:08:32.574 ] 00:08:32.574 } 00:08:32.574 } 00:08:32.574 }' 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:32.574 pt2 00:08:32.574 pt3' 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.574 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.832 
19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.832 [2024-12-12 19:37:15.548240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=01fa2e24-99e5-49fe-9f2f-25bad3e70bf1 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 01fa2e24-99e5-49fe-9f2f-25bad3e70bf1 ']' 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.832 [2024-12-12 19:37:15.579877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.832 [2024-12-12 19:37:15.579904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.832 [2024-12-12 19:37:15.579980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.832 [2024-12-12 19:37:15.580040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.832 [2024-12-12 19:37:15.580050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:32.832 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:32.833 19:37:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.833 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.091 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.091 [2024-12-12 19:37:15.731683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:33.091 [2024-12-12 19:37:15.733458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:33.091 [2024-12-12 19:37:15.733504] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:33.091 [2024-12-12 19:37:15.733587] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:33.091 [2024-12-12 19:37:15.733656] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:33.091 [2024-12-12 19:37:15.733683] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:33.092 [2024-12-12 19:37:15.733704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.092 [2024-12-12 19:37:15.733718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:33.092 request: 00:08:33.092 { 00:08:33.092 "name": "raid_bdev1", 00:08:33.092 "raid_level": "raid0", 00:08:33.092 "base_bdevs": [ 00:08:33.092 "malloc1", 00:08:33.092 "malloc2", 00:08:33.092 "malloc3" 00:08:33.092 ], 00:08:33.092 "strip_size_kb": 64, 00:08:33.092 "superblock": false, 00:08:33.092 "method": "bdev_raid_create", 00:08:33.092 "req_id": 1 00:08:33.092 } 00:08:33.092 Got JSON-RPC error response 00:08:33.092 response: 00:08:33.092 { 00:08:33.092 "code": -17, 00:08:33.092 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:33.092 } 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.092 [2024-12-12 19:37:15.787502] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:33.092 [2024-12-12 19:37:15.787606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.092 [2024-12-12 19:37:15.787655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:33.092 [2024-12-12 19:37:15.787696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.092 [2024-12-12 19:37:15.789918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.092 [2024-12-12 19:37:15.789991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:33.092 [2024-12-12 19:37:15.790124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:33.092 [2024-12-12 19:37:15.790226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:33.092 pt1 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.092 "name": "raid_bdev1", 00:08:33.092 "uuid": "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1", 00:08:33.092 
"strip_size_kb": 64, 00:08:33.092 "state": "configuring", 00:08:33.092 "raid_level": "raid0", 00:08:33.092 "superblock": true, 00:08:33.092 "num_base_bdevs": 3, 00:08:33.092 "num_base_bdevs_discovered": 1, 00:08:33.092 "num_base_bdevs_operational": 3, 00:08:33.092 "base_bdevs_list": [ 00:08:33.092 { 00:08:33.092 "name": "pt1", 00:08:33.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.092 "is_configured": true, 00:08:33.092 "data_offset": 2048, 00:08:33.092 "data_size": 63488 00:08:33.092 }, 00:08:33.092 { 00:08:33.092 "name": null, 00:08:33.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.092 "is_configured": false, 00:08:33.092 "data_offset": 2048, 00:08:33.092 "data_size": 63488 00:08:33.092 }, 00:08:33.092 { 00:08:33.092 "name": null, 00:08:33.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.092 "is_configured": false, 00:08:33.092 "data_offset": 2048, 00:08:33.092 "data_size": 63488 00:08:33.092 } 00:08:33.092 ] 00:08:33.092 }' 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.092 19:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.351 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:33.351 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.351 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.351 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.351 [2024-12-12 19:37:16.178895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.351 [2024-12-12 19:37:16.179039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.351 [2024-12-12 19:37:16.179083] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:33.351 [2024-12-12 19:37:16.179095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.351 [2024-12-12 19:37:16.179640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.351 [2024-12-12 19:37:16.179669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.351 [2024-12-12 19:37:16.179792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:33.351 [2024-12-12 19:37:16.179836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.351 pt2 00:08:33.351 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.351 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:33.351 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.351 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.351 [2024-12-12 19:37:16.190856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:33.610 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.610 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:33.610 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.610 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.610 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.610 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.611 19:37:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.611 "name": "raid_bdev1", 00:08:33.611 "uuid": "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1", 00:08:33.611 "strip_size_kb": 64, 00:08:33.611 "state": "configuring", 00:08:33.611 "raid_level": "raid0", 00:08:33.611 "superblock": true, 00:08:33.611 "num_base_bdevs": 3, 00:08:33.611 "num_base_bdevs_discovered": 1, 00:08:33.611 "num_base_bdevs_operational": 3, 00:08:33.611 "base_bdevs_list": [ 00:08:33.611 { 00:08:33.611 "name": "pt1", 00:08:33.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.611 "is_configured": true, 00:08:33.611 "data_offset": 2048, 00:08:33.611 "data_size": 63488 00:08:33.611 }, 00:08:33.611 { 00:08:33.611 "name": null, 00:08:33.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.611 "is_configured": false, 00:08:33.611 "data_offset": 0, 00:08:33.611 "data_size": 63488 00:08:33.611 }, 00:08:33.611 { 00:08:33.611 "name": null, 00:08:33.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.611 
"is_configured": false, 00:08:33.611 "data_offset": 2048, 00:08:33.611 "data_size": 63488 00:08:33.611 } 00:08:33.611 ] 00:08:33.611 }' 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.611 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.870 [2024-12-12 19:37:16.638167] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.870 [2024-12-12 19:37:16.638328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.870 [2024-12-12 19:37:16.638380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:33.870 [2024-12-12 19:37:16.638450] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.870 [2024-12-12 19:37:16.639048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.870 [2024-12-12 19:37:16.639126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.870 [2024-12-12 19:37:16.639305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:33.870 [2024-12-12 19:37:16.639376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.870 pt2 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.870 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.870 [2024-12-12 19:37:16.650108] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:33.870 [2024-12-12 19:37:16.650198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.870 [2024-12-12 19:37:16.650242] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:33.870 [2024-12-12 19:37:16.650290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.870 [2024-12-12 19:37:16.650769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.870 [2024-12-12 19:37:16.650858] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:33.870 [2024-12-12 19:37:16.651000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:33.870 [2024-12-12 19:37:16.651063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:33.870 [2024-12-12 19:37:16.651245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:33.870 [2024-12-12 19:37:16.651288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:33.870 [2024-12-12 19:37:16.651597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:33.870 [2024-12-12 19:37:16.651806] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:33.870 [2024-12-12 19:37:16.651847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:33.870 [2024-12-12 19:37:16.652060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.870 pt3 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.871 "name": "raid_bdev1", 00:08:33.871 "uuid": "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1", 00:08:33.871 "strip_size_kb": 64, 00:08:33.871 "state": "online", 00:08:33.871 "raid_level": "raid0", 00:08:33.871 "superblock": true, 00:08:33.871 "num_base_bdevs": 3, 00:08:33.871 "num_base_bdevs_discovered": 3, 00:08:33.871 "num_base_bdevs_operational": 3, 00:08:33.871 "base_bdevs_list": [ 00:08:33.871 { 00:08:33.871 "name": "pt1", 00:08:33.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.871 "is_configured": true, 00:08:33.871 "data_offset": 2048, 00:08:33.871 "data_size": 63488 00:08:33.871 }, 00:08:33.871 { 00:08:33.871 "name": "pt2", 00:08:33.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.871 "is_configured": true, 00:08:33.871 "data_offset": 2048, 00:08:33.871 "data_size": 63488 00:08:33.871 }, 00:08:33.871 { 00:08:33.871 "name": "pt3", 00:08:33.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.871 "is_configured": true, 00:08:33.871 "data_offset": 2048, 00:08:33.871 "data_size": 63488 00:08:33.871 } 00:08:33.871 ] 00:08:33.871 }' 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.871 19:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:34.482 19:37:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.482 [2024-12-12 19:37:17.138101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.482 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.482 "name": "raid_bdev1", 00:08:34.482 "aliases": [ 00:08:34.482 "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1" 00:08:34.482 ], 00:08:34.482 "product_name": "Raid Volume", 00:08:34.482 "block_size": 512, 00:08:34.482 "num_blocks": 190464, 00:08:34.482 "uuid": "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1", 00:08:34.482 "assigned_rate_limits": { 00:08:34.482 "rw_ios_per_sec": 0, 00:08:34.482 "rw_mbytes_per_sec": 0, 00:08:34.482 "r_mbytes_per_sec": 0, 00:08:34.482 "w_mbytes_per_sec": 0 00:08:34.482 }, 00:08:34.482 "claimed": false, 00:08:34.482 "zoned": false, 00:08:34.482 "supported_io_types": { 00:08:34.482 "read": true, 00:08:34.482 "write": true, 00:08:34.482 "unmap": true, 00:08:34.482 "flush": true, 00:08:34.482 "reset": true, 00:08:34.482 "nvme_admin": false, 00:08:34.482 "nvme_io": false, 00:08:34.482 "nvme_io_md": false, 00:08:34.482 
"write_zeroes": true, 00:08:34.482 "zcopy": false, 00:08:34.482 "get_zone_info": false, 00:08:34.482 "zone_management": false, 00:08:34.482 "zone_append": false, 00:08:34.482 "compare": false, 00:08:34.482 "compare_and_write": false, 00:08:34.482 "abort": false, 00:08:34.482 "seek_hole": false, 00:08:34.482 "seek_data": false, 00:08:34.482 "copy": false, 00:08:34.482 "nvme_iov_md": false 00:08:34.482 }, 00:08:34.482 "memory_domains": [ 00:08:34.482 { 00:08:34.482 "dma_device_id": "system", 00:08:34.482 "dma_device_type": 1 00:08:34.482 }, 00:08:34.482 { 00:08:34.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.482 "dma_device_type": 2 00:08:34.482 }, 00:08:34.483 { 00:08:34.483 "dma_device_id": "system", 00:08:34.483 "dma_device_type": 1 00:08:34.483 }, 00:08:34.483 { 00:08:34.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.483 "dma_device_type": 2 00:08:34.483 }, 00:08:34.483 { 00:08:34.483 "dma_device_id": "system", 00:08:34.483 "dma_device_type": 1 00:08:34.483 }, 00:08:34.483 { 00:08:34.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.483 "dma_device_type": 2 00:08:34.483 } 00:08:34.483 ], 00:08:34.483 "driver_specific": { 00:08:34.483 "raid": { 00:08:34.483 "uuid": "01fa2e24-99e5-49fe-9f2f-25bad3e70bf1", 00:08:34.483 "strip_size_kb": 64, 00:08:34.483 "state": "online", 00:08:34.483 "raid_level": "raid0", 00:08:34.483 "superblock": true, 00:08:34.483 "num_base_bdevs": 3, 00:08:34.483 "num_base_bdevs_discovered": 3, 00:08:34.483 "num_base_bdevs_operational": 3, 00:08:34.483 "base_bdevs_list": [ 00:08:34.483 { 00:08:34.483 "name": "pt1", 00:08:34.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.483 "is_configured": true, 00:08:34.483 "data_offset": 2048, 00:08:34.483 "data_size": 63488 00:08:34.483 }, 00:08:34.483 { 00:08:34.483 "name": "pt2", 00:08:34.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.483 "is_configured": true, 00:08:34.483 "data_offset": 2048, 00:08:34.483 "data_size": 63488 00:08:34.483 }, 00:08:34.483 
{ 00:08:34.483 "name": "pt3", 00:08:34.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:34.483 "is_configured": true, 00:08:34.483 "data_offset": 2048, 00:08:34.483 "data_size": 63488 00:08:34.483 } 00:08:34.483 ] 00:08:34.483 } 00:08:34.483 } 00:08:34.483 }' 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:34.483 pt2 00:08:34.483 pt3' 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.483 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.757 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.757 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.757 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.757 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:34.757 19:37:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.758 
[2024-12-12 19:37:17.414037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 01fa2e24-99e5-49fe-9f2f-25bad3e70bf1 '!=' 01fa2e24-99e5-49fe-9f2f-25bad3e70bf1 ']' 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66747 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66747 ']' 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66747 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66747 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66747' 00:08:34.758 killing process with pid 66747 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66747 00:08:34.758 [2024-12-12 19:37:17.486895] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.758 19:37:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 66747 00:08:34.758 [2024-12-12 19:37:17.487093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.758 [2024-12-12 19:37:17.487167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.758 [2024-12-12 19:37:17.487245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:35.018 [2024-12-12 19:37:17.856953] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.396 19:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:36.396 00:08:36.396 real 0m5.529s 00:08:36.396 user 0m7.831s 00:08:36.396 sys 0m0.818s 00:08:36.396 19:37:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.396 19:37:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.396 ************************************ 00:08:36.396 END TEST raid_superblock_test 00:08:36.396 ************************************ 00:08:36.656 19:37:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:36.656 19:37:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:36.656 19:37:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.656 19:37:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.656 ************************************ 00:08:36.656 START TEST raid_read_error_test 00:08:36.656 ************************************ 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:36.656 19:37:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.85xFua4jkk 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67011 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67011 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67011 ']' 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.656 19:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.656 [2024-12-12 19:37:19.375904] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:36.656 [2024-12-12 19:37:19.376034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67011 ] 00:08:36.915 [2024-12-12 19:37:19.549242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.915 [2024-12-12 19:37:19.656532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.175 [2024-12-12 19:37:19.848984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.175 [2024-12-12 19:37:19.849028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.435 BaseBdev1_malloc 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.435 true 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.435 [2024-12-12 19:37:20.261689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:37.435 [2024-12-12 19:37:20.261744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.435 [2024-12-12 19:37:20.261764] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:37.435 [2024-12-12 19:37:20.261775] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.435 [2024-12-12 19:37:20.263937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.435 [2024-12-12 19:37:20.264046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:37.435 BaseBdev1 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.435 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.695 BaseBdev2_malloc 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.695 true 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.695 [2024-12-12 19:37:20.325243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:37.695 [2024-12-12 19:37:20.325297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.695 [2024-12-12 19:37:20.325313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:37.695 [2024-12-12 19:37:20.325324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.695 [2024-12-12 19:37:20.327388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.695 [2024-12-12 19:37:20.327480] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:37.695 BaseBdev2 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.695 BaseBdev3_malloc 00:08:37.695 19:37:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.695 true 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.695 [2024-12-12 19:37:20.399930] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:37.695 [2024-12-12 19:37:20.400090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.695 [2024-12-12 19:37:20.400115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:37.695 [2024-12-12 19:37:20.400126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.695 [2024-12-12 19:37:20.402504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.695 [2024-12-12 19:37:20.402560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:37.695 BaseBdev3 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.695 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.696 [2024-12-12 19:37:20.411964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.696 [2024-12-12 19:37:20.413765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.696 [2024-12-12 19:37:20.413842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.696 [2024-12-12 19:37:20.414055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:37.696 [2024-12-12 19:37:20.414069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:37.696 [2024-12-12 19:37:20.414334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:37.696 [2024-12-12 19:37:20.414505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:37.696 [2024-12-12 19:37:20.414518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:37.696 [2024-12-12 19:37:20.414678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.696 19:37:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.696 "name": "raid_bdev1", 00:08:37.696 "uuid": "428e8e1e-676c-4ee1-bc29-ee0be3da4ade", 00:08:37.696 "strip_size_kb": 64, 00:08:37.696 "state": "online", 00:08:37.696 "raid_level": "raid0", 00:08:37.696 "superblock": true, 00:08:37.696 "num_base_bdevs": 3, 00:08:37.696 "num_base_bdevs_discovered": 3, 00:08:37.696 "num_base_bdevs_operational": 3, 00:08:37.696 "base_bdevs_list": [ 00:08:37.696 { 00:08:37.696 "name": "BaseBdev1", 00:08:37.696 "uuid": "0917fb53-794a-5a5e-bf87-c714ec0ff17d", 00:08:37.696 "is_configured": true, 00:08:37.696 "data_offset": 2048, 00:08:37.696 "data_size": 63488 00:08:37.696 }, 00:08:37.696 { 00:08:37.696 "name": "BaseBdev2", 00:08:37.696 "uuid": "ce6317bf-9ff0-52e0-bdfc-e3a95328b02c", 00:08:37.696 "is_configured": true, 00:08:37.696 "data_offset": 2048, 00:08:37.696 "data_size": 63488 
00:08:37.696 }, 00:08:37.696 { 00:08:37.696 "name": "BaseBdev3", 00:08:37.696 "uuid": "c3cf5b5c-732f-5ac5-9fa0-21b81cd9002d", 00:08:37.696 "is_configured": true, 00:08:37.696 "data_offset": 2048, 00:08:37.696 "data_size": 63488 00:08:37.696 } 00:08:37.696 ] 00:08:37.696 }' 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.696 19:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.266 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:38.266 19:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:38.266 [2024-12-12 19:37:20.936428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.206 "name": "raid_bdev1", 00:08:39.206 "uuid": "428e8e1e-676c-4ee1-bc29-ee0be3da4ade", 00:08:39.206 "strip_size_kb": 64, 00:08:39.206 "state": "online", 00:08:39.206 "raid_level": "raid0", 00:08:39.206 "superblock": true, 00:08:39.206 "num_base_bdevs": 3, 00:08:39.206 "num_base_bdevs_discovered": 3, 00:08:39.206 "num_base_bdevs_operational": 3, 00:08:39.206 "base_bdevs_list": [ 00:08:39.206 { 00:08:39.206 "name": "BaseBdev1", 00:08:39.206 "uuid": "0917fb53-794a-5a5e-bf87-c714ec0ff17d", 00:08:39.206 "is_configured": true, 00:08:39.206 "data_offset": 2048, 00:08:39.206 "data_size": 63488 
00:08:39.206 }, 00:08:39.206 { 00:08:39.206 "name": "BaseBdev2", 00:08:39.206 "uuid": "ce6317bf-9ff0-52e0-bdfc-e3a95328b02c", 00:08:39.206 "is_configured": true, 00:08:39.206 "data_offset": 2048, 00:08:39.206 "data_size": 63488 00:08:39.206 }, 00:08:39.206 { 00:08:39.206 "name": "BaseBdev3", 00:08:39.206 "uuid": "c3cf5b5c-732f-5ac5-9fa0-21b81cd9002d", 00:08:39.206 "is_configured": true, 00:08:39.206 "data_offset": 2048, 00:08:39.206 "data_size": 63488 00:08:39.206 } 00:08:39.206 ] 00:08:39.206 }' 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.206 19:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.773 [2024-12-12 19:37:22.312978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.773 [2024-12-12 19:37:22.313013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.773 [2024-12-12 19:37:22.315917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.773 [2024-12-12 19:37:22.315961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.773 [2024-12-12 19:37:22.315998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.773 [2024-12-12 19:37:22.316006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:39.773 { 00:08:39.773 "results": [ 00:08:39.773 { 00:08:39.773 "job": "raid_bdev1", 00:08:39.773 "core_mask": "0x1", 00:08:39.773 "workload": "randrw", 00:08:39.773 "percentage": 50, 
00:08:39.773 "status": "finished", 00:08:39.773 "queue_depth": 1, 00:08:39.773 "io_size": 131072, 00:08:39.773 "runtime": 1.377467, 00:08:39.773 "iops": 14806.888295690569, 00:08:39.773 "mibps": 1850.8610369613211, 00:08:39.773 "io_failed": 1, 00:08:39.773 "io_timeout": 0, 00:08:39.773 "avg_latency_us": 93.61598916528739, 00:08:39.773 "min_latency_us": 26.717903930131005, 00:08:39.773 "max_latency_us": 1652.709170305677 00:08:39.773 } 00:08:39.773 ], 00:08:39.773 "core_count": 1 00:08:39.773 } 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67011 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67011 ']' 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67011 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67011 00:08:39.773 killing process with pid 67011 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.773 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.774 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67011' 00:08:39.774 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67011 00:08:39.774 [2024-12-12 19:37:22.358296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.774 19:37:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67011 00:08:39.774 [2024-12-12 
19:37:22.587560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.85xFua4jkk 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:41.155 00:08:41.155 real 0m4.507s 00:08:41.155 user 0m5.341s 00:08:41.155 sys 0m0.524s 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.155 ************************************ 00:08:41.155 END TEST raid_read_error_test 00:08:41.155 ************************************ 00:08:41.155 19:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.155 19:37:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:41.155 19:37:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:41.155 19:37:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.155 19:37:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.155 ************************************ 00:08:41.155 START TEST raid_write_error_test 00:08:41.155 ************************************ 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:41.155 19:37:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:41.155 19:37:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AUhmkN0xYK 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67151 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67151 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67151 ']' 00:08:41.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.155 19:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.155 [2024-12-12 19:37:23.913906] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:41.156 [2024-12-12 19:37:23.914026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67151 ] 00:08:41.415 [2024-12-12 19:37:24.089905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.415 [2024-12-12 19:37:24.203878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.687 [2024-12-12 19:37:24.405189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.687 [2024-12-12 19:37:24.405250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.946 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.946 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:41.946 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:41.946 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:41.946 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.946 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.206 BaseBdev1_malloc 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.206 true 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.206 [2024-12-12 19:37:24.829796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:42.206 [2024-12-12 19:37:24.829921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.206 [2024-12-12 19:37:24.829949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:42.206 [2024-12-12 19:37:24.829962] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.206 [2024-12-12 19:37:24.832157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.206 [2024-12-12 19:37:24.832199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:42.206 BaseBdev1 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.206 BaseBdev2_malloc 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.206 true 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.206 [2024-12-12 19:37:24.897842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:42.206 [2024-12-12 19:37:24.897894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.206 [2024-12-12 19:37:24.897909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:42.206 [2024-12-12 19:37:24.897919] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.206 [2024-12-12 19:37:24.900016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.206 [2024-12-12 19:37:24.900054] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:42.206 BaseBdev2 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.206 19:37:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.206 BaseBdev3_malloc 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.206 true 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.206 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.207 [2024-12-12 19:37:24.974566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:42.207 [2024-12-12 19:37:24.974619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.207 [2024-12-12 19:37:24.974635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:42.207 [2024-12-12 19:37:24.974645] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.207 [2024-12-12 19:37:24.976782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.207 [2024-12-12 19:37:24.976870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:42.207 BaseBdev3 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.207 [2024-12-12 19:37:24.986629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.207 [2024-12-12 19:37:24.988357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.207 [2024-12-12 19:37:24.988429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.207 [2024-12-12 19:37:24.988628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:42.207 [2024-12-12 19:37:24.988642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:42.207 [2024-12-12 19:37:24.988874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:42.207 [2024-12-12 19:37:24.989051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:42.207 [2024-12-12 19:37:24.989064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:42.207 [2024-12-12 19:37:24.989198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.207 19:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.207 19:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.207 19:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.207 "name": "raid_bdev1", 00:08:42.207 "uuid": "cd41d1c7-1a62-4e6c-b18f-bee90cb2a38f", 00:08:42.207 "strip_size_kb": 64, 00:08:42.207 "state": "online", 00:08:42.207 "raid_level": "raid0", 00:08:42.207 "superblock": true, 00:08:42.207 "num_base_bdevs": 3, 00:08:42.207 "num_base_bdevs_discovered": 3, 00:08:42.207 "num_base_bdevs_operational": 3, 00:08:42.207 "base_bdevs_list": [ 00:08:42.207 { 00:08:42.207 "name": "BaseBdev1", 
00:08:42.207 "uuid": "4cf63328-bfec-58ca-a502-4dc1668c7f33", 00:08:42.207 "is_configured": true, 00:08:42.207 "data_offset": 2048, 00:08:42.207 "data_size": 63488 00:08:42.207 }, 00:08:42.207 { 00:08:42.207 "name": "BaseBdev2", 00:08:42.207 "uuid": "ffa47da3-68f3-5f5e-9507-28d667aecf13", 00:08:42.207 "is_configured": true, 00:08:42.207 "data_offset": 2048, 00:08:42.207 "data_size": 63488 00:08:42.207 }, 00:08:42.207 { 00:08:42.207 "name": "BaseBdev3", 00:08:42.207 "uuid": "c9d531d3-ea40-53dd-85db-fd2f734358a6", 00:08:42.207 "is_configured": true, 00:08:42.207 "data_offset": 2048, 00:08:42.207 "data_size": 63488 00:08:42.207 } 00:08:42.207 ] 00:08:42.207 }' 00:08:42.207 19:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.207 19:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 19:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:42.774 19:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:42.774 [2024-12-12 19:37:25.487099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.713 "name": "raid_bdev1", 00:08:43.713 "uuid": "cd41d1c7-1a62-4e6c-b18f-bee90cb2a38f", 00:08:43.713 "strip_size_kb": 64, 00:08:43.713 "state": "online", 00:08:43.713 
"raid_level": "raid0", 00:08:43.713 "superblock": true, 00:08:43.713 "num_base_bdevs": 3, 00:08:43.713 "num_base_bdevs_discovered": 3, 00:08:43.713 "num_base_bdevs_operational": 3, 00:08:43.713 "base_bdevs_list": [ 00:08:43.713 { 00:08:43.713 "name": "BaseBdev1", 00:08:43.713 "uuid": "4cf63328-bfec-58ca-a502-4dc1668c7f33", 00:08:43.713 "is_configured": true, 00:08:43.713 "data_offset": 2048, 00:08:43.713 "data_size": 63488 00:08:43.713 }, 00:08:43.713 { 00:08:43.713 "name": "BaseBdev2", 00:08:43.713 "uuid": "ffa47da3-68f3-5f5e-9507-28d667aecf13", 00:08:43.713 "is_configured": true, 00:08:43.713 "data_offset": 2048, 00:08:43.713 "data_size": 63488 00:08:43.713 }, 00:08:43.713 { 00:08:43.713 "name": "BaseBdev3", 00:08:43.713 "uuid": "c9d531d3-ea40-53dd-85db-fd2f734358a6", 00:08:43.713 "is_configured": true, 00:08:43.713 "data_offset": 2048, 00:08:43.713 "data_size": 63488 00:08:43.713 } 00:08:43.713 ] 00:08:43.713 }' 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.713 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.282 [2024-12-12 19:37:26.863767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.282 [2024-12-12 19:37:26.863862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.282 [2024-12-12 19:37:26.866714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.282 [2024-12-12 19:37:26.866808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.282 [2024-12-12 19:37:26.866883] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.282 [2024-12-12 19:37:26.866938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.282 { 00:08:44.282 "results": [ 00:08:44.282 { 00:08:44.282 "job": "raid_bdev1", 00:08:44.282 "core_mask": "0x1", 00:08:44.282 "workload": "randrw", 00:08:44.282 "percentage": 50, 00:08:44.282 "status": "finished", 00:08:44.282 "queue_depth": 1, 00:08:44.282 "io_size": 131072, 00:08:44.282 "runtime": 1.377592, 00:08:44.282 "iops": 15130.023983879117, 00:08:44.282 "mibps": 1891.2529979848896, 00:08:44.282 "io_failed": 1, 00:08:44.282 "io_timeout": 0, 00:08:44.282 "avg_latency_us": 91.65833176208541, 00:08:44.282 "min_latency_us": 26.382532751091702, 00:08:44.282 "max_latency_us": 1538.235807860262 00:08:44.282 } 00:08:44.282 ], 00:08:44.282 "core_count": 1 00:08:44.282 } 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67151 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67151 ']' 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67151 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67151 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67151' 00:08:44.282 killing process with pid 67151 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67151 00:08:44.282 [2024-12-12 19:37:26.899499] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.282 19:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67151 00:08:44.541 [2024-12-12 19:37:27.133838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AUhmkN0xYK 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:45.480 00:08:45.480 real 0m4.469s 00:08:45.480 user 0m5.311s 00:08:45.480 sys 0m0.509s 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.480 ************************************ 00:08:45.480 END TEST raid_write_error_test 00:08:45.480 ************************************ 00:08:45.480 19:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 19:37:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:45.740 19:37:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:45.740 19:37:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:45.740 19:37:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.740 19:37:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 ************************************ 00:08:45.740 START TEST raid_state_function_test 00:08:45.740 ************************************ 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:45.740 19:37:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67295 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67295' 00:08:45.740 Process raid pid: 67295 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67295 00:08:45.740 19:37:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67295 ']' 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.740 19:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 [2024-12-12 19:37:28.463630] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:45.740 [2024-12-12 19:37:28.463744] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.000 [2024-12-12 19:37:28.639753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.000 [2024-12-12 19:37:28.756766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.259 [2024-12-12 19:37:28.963885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.259 [2024-12-12 19:37:28.963933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.517 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.517 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:46.517 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.518 [2024-12-12 19:37:29.323087] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.518 [2024-12-12 19:37:29.323386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.518 [2024-12-12 19:37:29.323409] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.518 [2024-12-12 19:37:29.323458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.518 [2024-12-12 19:37:29.323467] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.518 [2024-12-12 19:37:29.323512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.518 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.777 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.777 "name": "Existed_Raid", 00:08:46.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.777 "strip_size_kb": 64, 00:08:46.777 "state": "configuring", 00:08:46.777 "raid_level": "concat", 00:08:46.777 "superblock": false, 00:08:46.777 "num_base_bdevs": 3, 00:08:46.777 "num_base_bdevs_discovered": 0, 00:08:46.777 "num_base_bdevs_operational": 3, 00:08:46.777 "base_bdevs_list": [ 00:08:46.777 { 00:08:46.777 "name": "BaseBdev1", 00:08:46.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.777 "is_configured": false, 00:08:46.777 "data_offset": 0, 00:08:46.777 "data_size": 0 00:08:46.777 }, 00:08:46.777 { 00:08:46.777 "name": "BaseBdev2", 00:08:46.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.777 "is_configured": false, 00:08:46.777 "data_offset": 0, 00:08:46.777 "data_size": 0 00:08:46.777 }, 00:08:46.777 { 00:08:46.777 "name": "BaseBdev3", 00:08:46.777 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:46.777 "is_configured": false, 00:08:46.777 "data_offset": 0, 00:08:46.777 "data_size": 0 00:08:46.777 } 00:08:46.777 ] 00:08:46.777 }' 00:08:46.777 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.777 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 [2024-12-12 19:37:29.786243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.037 [2024-12-12 19:37:29.786280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 [2024-12-12 19:37:29.798220] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.037 [2024-12-12 19:37:29.798637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.037 [2024-12-12 19:37:29.798695] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.037 [2024-12-12 19:37:29.798831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:47.037 [2024-12-12 19:37:29.798867] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.037 [2024-12-12 19:37:29.798962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 [2024-12-12 19:37:29.844920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.037 BaseBdev1 00:08:47.037 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.038 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.038 [ 00:08:47.038 { 00:08:47.038 "name": "BaseBdev1", 00:08:47.038 "aliases": [ 00:08:47.038 "c44afd9e-7cab-4119-9c2d-a04c046354c9" 00:08:47.038 ], 00:08:47.038 "product_name": "Malloc disk", 00:08:47.038 "block_size": 512, 00:08:47.038 "num_blocks": 65536, 00:08:47.038 "uuid": "c44afd9e-7cab-4119-9c2d-a04c046354c9", 00:08:47.038 "assigned_rate_limits": { 00:08:47.038 "rw_ios_per_sec": 0, 00:08:47.038 "rw_mbytes_per_sec": 0, 00:08:47.038 "r_mbytes_per_sec": 0, 00:08:47.038 "w_mbytes_per_sec": 0 00:08:47.038 }, 00:08:47.038 "claimed": true, 00:08:47.038 "claim_type": "exclusive_write", 00:08:47.038 "zoned": false, 00:08:47.038 "supported_io_types": { 00:08:47.038 "read": true, 00:08:47.038 "write": true, 00:08:47.038 "unmap": true, 00:08:47.038 "flush": true, 00:08:47.038 "reset": true, 00:08:47.038 "nvme_admin": false, 00:08:47.038 "nvme_io": false, 00:08:47.038 "nvme_io_md": false, 00:08:47.038 "write_zeroes": true, 00:08:47.038 "zcopy": true, 00:08:47.038 "get_zone_info": false, 00:08:47.038 "zone_management": false, 00:08:47.038 "zone_append": false, 00:08:47.038 "compare": false, 00:08:47.038 "compare_and_write": false, 00:08:47.038 "abort": true, 00:08:47.038 "seek_hole": false, 00:08:47.038 "seek_data": false, 00:08:47.038 "copy": true, 00:08:47.038 "nvme_iov_md": false 00:08:47.038 }, 00:08:47.038 "memory_domains": [ 00:08:47.038 { 00:08:47.038 "dma_device_id": "system", 00:08:47.038 "dma_device_type": 1 00:08:47.038 }, 00:08:47.038 { 00:08:47.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:47.038 "dma_device_type": 2 00:08:47.038 } 00:08:47.038 ], 00:08:47.298 "driver_specific": {} 00:08:47.298 } 00:08:47.298 ] 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.298 19:37:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.298 "name": "Existed_Raid", 00:08:47.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.298 "strip_size_kb": 64, 00:08:47.298 "state": "configuring", 00:08:47.298 "raid_level": "concat", 00:08:47.298 "superblock": false, 00:08:47.298 "num_base_bdevs": 3, 00:08:47.298 "num_base_bdevs_discovered": 1, 00:08:47.298 "num_base_bdevs_operational": 3, 00:08:47.298 "base_bdevs_list": [ 00:08:47.298 { 00:08:47.298 "name": "BaseBdev1", 00:08:47.298 "uuid": "c44afd9e-7cab-4119-9c2d-a04c046354c9", 00:08:47.298 "is_configured": true, 00:08:47.298 "data_offset": 0, 00:08:47.298 "data_size": 65536 00:08:47.298 }, 00:08:47.298 { 00:08:47.298 "name": "BaseBdev2", 00:08:47.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.298 "is_configured": false, 00:08:47.298 "data_offset": 0, 00:08:47.298 "data_size": 0 00:08:47.298 }, 00:08:47.298 { 00:08:47.298 "name": "BaseBdev3", 00:08:47.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.298 "is_configured": false, 00:08:47.298 "data_offset": 0, 00:08:47.298 "data_size": 0 00:08:47.298 } 00:08:47.298 ] 00:08:47.298 }' 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.298 19:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.557 [2024-12-12 19:37:30.340147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.557 [2024-12-12 19:37:30.340207] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.557 [2024-12-12 19:37:30.348189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.557 [2024-12-12 19:37:30.350177] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.557 [2024-12-12 19:37:30.350507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.557 [2024-12-12 19:37:30.350523] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.557 [2024-12-12 19:37:30.350603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.557 19:37:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.557 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.558 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.558 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.558 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.558 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.558 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.558 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.558 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.558 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.558 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.817 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.817 "name": "Existed_Raid", 00:08:47.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.817 "strip_size_kb": 64, 00:08:47.817 "state": "configuring", 00:08:47.817 "raid_level": "concat", 00:08:47.817 "superblock": false, 00:08:47.817 "num_base_bdevs": 3, 00:08:47.817 "num_base_bdevs_discovered": 1, 00:08:47.817 "num_base_bdevs_operational": 3, 00:08:47.817 "base_bdevs_list": [ 00:08:47.817 { 00:08:47.817 "name": "BaseBdev1", 00:08:47.817 "uuid": "c44afd9e-7cab-4119-9c2d-a04c046354c9", 00:08:47.817 "is_configured": true, 00:08:47.817 "data_offset": 
0, 00:08:47.817 "data_size": 65536 00:08:47.817 }, 00:08:47.817 { 00:08:47.817 "name": "BaseBdev2", 00:08:47.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.817 "is_configured": false, 00:08:47.817 "data_offset": 0, 00:08:47.817 "data_size": 0 00:08:47.817 }, 00:08:47.817 { 00:08:47.818 "name": "BaseBdev3", 00:08:47.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.818 "is_configured": false, 00:08:47.818 "data_offset": 0, 00:08:47.818 "data_size": 0 00:08:47.818 } 00:08:47.818 ] 00:08:47.818 }' 00:08:47.818 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.818 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.077 [2024-12-12 19:37:30.865215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.077 BaseBdev2 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.077 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.077 [ 00:08:48.077 { 00:08:48.077 "name": "BaseBdev2", 00:08:48.077 "aliases": [ 00:08:48.077 "3b848543-58c7-45fc-af32-de393ff49c1a" 00:08:48.077 ], 00:08:48.077 "product_name": "Malloc disk", 00:08:48.077 "block_size": 512, 00:08:48.077 "num_blocks": 65536, 00:08:48.077 "uuid": "3b848543-58c7-45fc-af32-de393ff49c1a", 00:08:48.077 "assigned_rate_limits": { 00:08:48.077 "rw_ios_per_sec": 0, 00:08:48.077 "rw_mbytes_per_sec": 0, 00:08:48.077 "r_mbytes_per_sec": 0, 00:08:48.077 "w_mbytes_per_sec": 0 00:08:48.077 }, 00:08:48.077 "claimed": true, 00:08:48.077 "claim_type": "exclusive_write", 00:08:48.077 "zoned": false, 00:08:48.077 "supported_io_types": { 00:08:48.077 "read": true, 00:08:48.077 "write": true, 00:08:48.077 "unmap": true, 00:08:48.077 "flush": true, 00:08:48.077 "reset": true, 00:08:48.077 "nvme_admin": false, 00:08:48.077 "nvme_io": false, 00:08:48.077 "nvme_io_md": false, 00:08:48.077 "write_zeroes": true, 00:08:48.077 "zcopy": true, 00:08:48.077 "get_zone_info": false, 00:08:48.077 "zone_management": false, 00:08:48.077 "zone_append": false, 00:08:48.077 "compare": false, 00:08:48.077 "compare_and_write": false, 00:08:48.077 "abort": true, 00:08:48.077 "seek_hole": 
false, 00:08:48.077 "seek_data": false, 00:08:48.077 "copy": true, 00:08:48.077 "nvme_iov_md": false 00:08:48.077 }, 00:08:48.077 "memory_domains": [ 00:08:48.077 { 00:08:48.077 "dma_device_id": "system", 00:08:48.077 "dma_device_type": 1 00:08:48.077 }, 00:08:48.077 { 00:08:48.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.078 "dma_device_type": 2 00:08:48.078 } 00:08:48.078 ], 00:08:48.078 "driver_specific": {} 00:08:48.078 } 00:08:48.078 ] 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.078 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.338 19:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.338 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.338 "name": "Existed_Raid", 00:08:48.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.338 "strip_size_kb": 64, 00:08:48.338 "state": "configuring", 00:08:48.338 "raid_level": "concat", 00:08:48.338 "superblock": false, 00:08:48.338 "num_base_bdevs": 3, 00:08:48.338 "num_base_bdevs_discovered": 2, 00:08:48.338 "num_base_bdevs_operational": 3, 00:08:48.338 "base_bdevs_list": [ 00:08:48.338 { 00:08:48.338 "name": "BaseBdev1", 00:08:48.338 "uuid": "c44afd9e-7cab-4119-9c2d-a04c046354c9", 00:08:48.338 "is_configured": true, 00:08:48.338 "data_offset": 0, 00:08:48.338 "data_size": 65536 00:08:48.338 }, 00:08:48.338 { 00:08:48.338 "name": "BaseBdev2", 00:08:48.338 "uuid": "3b848543-58c7-45fc-af32-de393ff49c1a", 00:08:48.338 "is_configured": true, 00:08:48.338 "data_offset": 0, 00:08:48.338 "data_size": 65536 00:08:48.338 }, 00:08:48.338 { 00:08:48.338 "name": "BaseBdev3", 00:08:48.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.338 "is_configured": false, 00:08:48.338 "data_offset": 0, 00:08:48.338 "data_size": 0 00:08:48.338 } 00:08:48.338 ] 00:08:48.338 }' 00:08:48.338 19:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.338 19:37:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.598 [2024-12-12 19:37:31.419696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.598 [2024-12-12 19:37:31.419738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.598 [2024-12-12 19:37:31.419750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:48.598 [2024-12-12 19:37:31.419995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:48.598 [2024-12-12 19:37:31.420160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.598 [2024-12-12 19:37:31.420170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:48.598 [2024-12-12 19:37:31.420437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.598 BaseBdev3 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.598 19:37:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.598 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.857 [ 00:08:48.857 { 00:08:48.857 "name": "BaseBdev3", 00:08:48.857 "aliases": [ 00:08:48.857 "bfb6c8bb-0d5c-4e21-9ccf-fcf19267d4fe" 00:08:48.857 ], 00:08:48.857 "product_name": "Malloc disk", 00:08:48.857 "block_size": 512, 00:08:48.857 "num_blocks": 65536, 00:08:48.857 "uuid": "bfb6c8bb-0d5c-4e21-9ccf-fcf19267d4fe", 00:08:48.857 "assigned_rate_limits": { 00:08:48.857 "rw_ios_per_sec": 0, 00:08:48.857 "rw_mbytes_per_sec": 0, 00:08:48.857 "r_mbytes_per_sec": 0, 00:08:48.857 "w_mbytes_per_sec": 0 00:08:48.857 }, 00:08:48.857 "claimed": true, 00:08:48.857 "claim_type": "exclusive_write", 00:08:48.857 "zoned": false, 00:08:48.857 "supported_io_types": { 00:08:48.857 "read": true, 00:08:48.857 "write": true, 00:08:48.857 "unmap": true, 00:08:48.857 "flush": true, 00:08:48.857 "reset": true, 00:08:48.857 "nvme_admin": false, 00:08:48.857 "nvme_io": false, 00:08:48.857 "nvme_io_md": false, 00:08:48.857 "write_zeroes": true, 00:08:48.857 "zcopy": true, 00:08:48.857 "get_zone_info": false, 00:08:48.857 "zone_management": false, 00:08:48.857 "zone_append": false, 00:08:48.857 "compare": false, 
00:08:48.857 "compare_and_write": false, 00:08:48.857 "abort": true, 00:08:48.857 "seek_hole": false, 00:08:48.857 "seek_data": false, 00:08:48.857 "copy": true, 00:08:48.857 "nvme_iov_md": false 00:08:48.857 }, 00:08:48.857 "memory_domains": [ 00:08:48.857 { 00:08:48.857 "dma_device_id": "system", 00:08:48.857 "dma_device_type": 1 00:08:48.857 }, 00:08:48.857 { 00:08:48.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.857 "dma_device_type": 2 00:08:48.857 } 00:08:48.857 ], 00:08:48.857 "driver_specific": {} 00:08:48.857 } 00:08:48.857 ] 00:08:48.857 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.857 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.857 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.858 "name": "Existed_Raid", 00:08:48.858 "uuid": "9721ab61-58e7-4bee-be3d-8fbe488c713f", 00:08:48.858 "strip_size_kb": 64, 00:08:48.858 "state": "online", 00:08:48.858 "raid_level": "concat", 00:08:48.858 "superblock": false, 00:08:48.858 "num_base_bdevs": 3, 00:08:48.858 "num_base_bdevs_discovered": 3, 00:08:48.858 "num_base_bdevs_operational": 3, 00:08:48.858 "base_bdevs_list": [ 00:08:48.858 { 00:08:48.858 "name": "BaseBdev1", 00:08:48.858 "uuid": "c44afd9e-7cab-4119-9c2d-a04c046354c9", 00:08:48.858 "is_configured": true, 00:08:48.858 "data_offset": 0, 00:08:48.858 "data_size": 65536 00:08:48.858 }, 00:08:48.858 { 00:08:48.858 "name": "BaseBdev2", 00:08:48.858 "uuid": "3b848543-58c7-45fc-af32-de393ff49c1a", 00:08:48.858 "is_configured": true, 00:08:48.858 "data_offset": 0, 00:08:48.858 "data_size": 65536 00:08:48.858 }, 00:08:48.858 { 00:08:48.858 "name": "BaseBdev3", 00:08:48.858 "uuid": "bfb6c8bb-0d5c-4e21-9ccf-fcf19267d4fe", 00:08:48.858 "is_configured": true, 00:08:48.858 "data_offset": 0, 00:08:48.858 "data_size": 65536 00:08:48.858 } 00:08:48.858 ] 00:08:48.858 }' 00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:48.858 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.116 [2024-12-12 19:37:31.895244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.116 "name": "Existed_Raid", 00:08:49.116 "aliases": [ 00:08:49.116 "9721ab61-58e7-4bee-be3d-8fbe488c713f" 00:08:49.116 ], 00:08:49.116 "product_name": "Raid Volume", 00:08:49.116 "block_size": 512, 00:08:49.116 "num_blocks": 196608, 00:08:49.116 "uuid": "9721ab61-58e7-4bee-be3d-8fbe488c713f", 00:08:49.116 "assigned_rate_limits": { 00:08:49.116 "rw_ios_per_sec": 0, 00:08:49.116 "rw_mbytes_per_sec": 0, 00:08:49.116 "r_mbytes_per_sec": 
0, 00:08:49.116 "w_mbytes_per_sec": 0 00:08:49.116 }, 00:08:49.116 "claimed": false, 00:08:49.116 "zoned": false, 00:08:49.116 "supported_io_types": { 00:08:49.116 "read": true, 00:08:49.116 "write": true, 00:08:49.116 "unmap": true, 00:08:49.116 "flush": true, 00:08:49.116 "reset": true, 00:08:49.116 "nvme_admin": false, 00:08:49.116 "nvme_io": false, 00:08:49.116 "nvme_io_md": false, 00:08:49.116 "write_zeroes": true, 00:08:49.116 "zcopy": false, 00:08:49.116 "get_zone_info": false, 00:08:49.116 "zone_management": false, 00:08:49.116 "zone_append": false, 00:08:49.116 "compare": false, 00:08:49.116 "compare_and_write": false, 00:08:49.116 "abort": false, 00:08:49.116 "seek_hole": false, 00:08:49.116 "seek_data": false, 00:08:49.116 "copy": false, 00:08:49.116 "nvme_iov_md": false 00:08:49.116 }, 00:08:49.116 "memory_domains": [ 00:08:49.116 { 00:08:49.116 "dma_device_id": "system", 00:08:49.116 "dma_device_type": 1 00:08:49.116 }, 00:08:49.116 { 00:08:49.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.116 "dma_device_type": 2 00:08:49.116 }, 00:08:49.116 { 00:08:49.116 "dma_device_id": "system", 00:08:49.116 "dma_device_type": 1 00:08:49.116 }, 00:08:49.116 { 00:08:49.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.116 "dma_device_type": 2 00:08:49.116 }, 00:08:49.116 { 00:08:49.116 "dma_device_id": "system", 00:08:49.116 "dma_device_type": 1 00:08:49.116 }, 00:08:49.116 { 00:08:49.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.116 "dma_device_type": 2 00:08:49.116 } 00:08:49.116 ], 00:08:49.116 "driver_specific": { 00:08:49.116 "raid": { 00:08:49.116 "uuid": "9721ab61-58e7-4bee-be3d-8fbe488c713f", 00:08:49.116 "strip_size_kb": 64, 00:08:49.116 "state": "online", 00:08:49.116 "raid_level": "concat", 00:08:49.116 "superblock": false, 00:08:49.116 "num_base_bdevs": 3, 00:08:49.116 "num_base_bdevs_discovered": 3, 00:08:49.116 "num_base_bdevs_operational": 3, 00:08:49.116 "base_bdevs_list": [ 00:08:49.116 { 00:08:49.116 "name": "BaseBdev1", 
00:08:49.116 "uuid": "c44afd9e-7cab-4119-9c2d-a04c046354c9", 00:08:49.116 "is_configured": true, 00:08:49.116 "data_offset": 0, 00:08:49.116 "data_size": 65536 00:08:49.116 }, 00:08:49.116 { 00:08:49.116 "name": "BaseBdev2", 00:08:49.116 "uuid": "3b848543-58c7-45fc-af32-de393ff49c1a", 00:08:49.116 "is_configured": true, 00:08:49.116 "data_offset": 0, 00:08:49.116 "data_size": 65536 00:08:49.116 }, 00:08:49.116 { 00:08:49.116 "name": "BaseBdev3", 00:08:49.116 "uuid": "bfb6c8bb-0d5c-4e21-9ccf-fcf19267d4fe", 00:08:49.116 "is_configured": true, 00:08:49.116 "data_offset": 0, 00:08:49.116 "data_size": 65536 00:08:49.116 } 00:08:49.116 ] 00:08:49.116 } 00:08:49.116 } 00:08:49.116 }' 00:08:49.116 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.376 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.376 BaseBdev2 00:08:49.376 BaseBdev3' 00:08:49.376 19:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.376 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.376 [2024-12-12 19:37:32.170531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.376 [2024-12-12 19:37:32.170578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.376 [2024-12-12 19:37:32.170631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.636 "name": "Existed_Raid", 00:08:49.636 "uuid": "9721ab61-58e7-4bee-be3d-8fbe488c713f", 00:08:49.636 "strip_size_kb": 64, 00:08:49.636 "state": "offline", 00:08:49.636 "raid_level": "concat", 00:08:49.636 "superblock": false, 00:08:49.636 "num_base_bdevs": 3, 00:08:49.636 "num_base_bdevs_discovered": 2, 00:08:49.636 "num_base_bdevs_operational": 2, 00:08:49.636 "base_bdevs_list": [ 00:08:49.636 { 00:08:49.636 "name": null, 00:08:49.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.636 "is_configured": false, 00:08:49.636 "data_offset": 0, 00:08:49.636 "data_size": 65536 00:08:49.636 }, 00:08:49.636 { 00:08:49.636 "name": "BaseBdev2", 00:08:49.636 "uuid": 
"3b848543-58c7-45fc-af32-de393ff49c1a", 00:08:49.636 "is_configured": true, 00:08:49.636 "data_offset": 0, 00:08:49.636 "data_size": 65536 00:08:49.636 }, 00:08:49.636 { 00:08:49.636 "name": "BaseBdev3", 00:08:49.636 "uuid": "bfb6c8bb-0d5c-4e21-9ccf-fcf19267d4fe", 00:08:49.636 "is_configured": true, 00:08:49.636 "data_offset": 0, 00:08:49.636 "data_size": 65536 00:08:49.636 } 00:08:49.636 ] 00:08:49.636 }' 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.636 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.896 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:49.896 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.896 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.896 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.896 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.896 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.896 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.156 [2024-12-12 19:37:32.771511] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.156 19:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.156 [2024-12-12 19:37:32.912086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.156 [2024-12-12 19:37:32.912193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.416 19:37:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.416 BaseBdev2 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.416 
19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.416 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.416 [ 00:08:50.416 { 00:08:50.416 "name": "BaseBdev2", 00:08:50.416 "aliases": [ 00:08:50.416 "27395e9a-4845-48d1-9ac7-47f3699fca15" 00:08:50.416 ], 00:08:50.416 "product_name": "Malloc disk", 00:08:50.416 "block_size": 512, 00:08:50.417 "num_blocks": 65536, 00:08:50.417 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:50.417 "assigned_rate_limits": { 00:08:50.417 "rw_ios_per_sec": 0, 00:08:50.417 "rw_mbytes_per_sec": 0, 00:08:50.417 "r_mbytes_per_sec": 0, 00:08:50.417 "w_mbytes_per_sec": 0 00:08:50.417 }, 00:08:50.417 "claimed": false, 00:08:50.417 "zoned": false, 00:08:50.417 "supported_io_types": { 00:08:50.417 "read": true, 00:08:50.417 "write": true, 00:08:50.417 "unmap": true, 00:08:50.417 "flush": true, 00:08:50.417 "reset": true, 00:08:50.417 "nvme_admin": false, 00:08:50.417 "nvme_io": false, 00:08:50.417 "nvme_io_md": false, 00:08:50.417 "write_zeroes": true, 
00:08:50.417 "zcopy": true, 00:08:50.417 "get_zone_info": false, 00:08:50.417 "zone_management": false, 00:08:50.417 "zone_append": false, 00:08:50.417 "compare": false, 00:08:50.417 "compare_and_write": false, 00:08:50.417 "abort": true, 00:08:50.417 "seek_hole": false, 00:08:50.417 "seek_data": false, 00:08:50.417 "copy": true, 00:08:50.417 "nvme_iov_md": false 00:08:50.417 }, 00:08:50.417 "memory_domains": [ 00:08:50.417 { 00:08:50.417 "dma_device_id": "system", 00:08:50.417 "dma_device_type": 1 00:08:50.417 }, 00:08:50.417 { 00:08:50.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.417 "dma_device_type": 2 00:08:50.417 } 00:08:50.417 ], 00:08:50.417 "driver_specific": {} 00:08:50.417 } 00:08:50.417 ] 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.417 BaseBdev3 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.417 19:37:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.417 [ 00:08:50.417 { 00:08:50.417 "name": "BaseBdev3", 00:08:50.417 "aliases": [ 00:08:50.417 "001b03e9-adb9-44e1-afe3-00c3dddec45a" 00:08:50.417 ], 00:08:50.417 "product_name": "Malloc disk", 00:08:50.417 "block_size": 512, 00:08:50.417 "num_blocks": 65536, 00:08:50.417 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:50.417 "assigned_rate_limits": { 00:08:50.417 "rw_ios_per_sec": 0, 00:08:50.417 "rw_mbytes_per_sec": 0, 00:08:50.417 "r_mbytes_per_sec": 0, 00:08:50.417 "w_mbytes_per_sec": 0 00:08:50.417 }, 00:08:50.417 "claimed": false, 00:08:50.417 "zoned": false, 00:08:50.417 "supported_io_types": { 00:08:50.417 "read": true, 00:08:50.417 "write": true, 00:08:50.417 "unmap": true, 00:08:50.417 "flush": true, 00:08:50.417 "reset": true, 00:08:50.417 "nvme_admin": false, 00:08:50.417 "nvme_io": false, 00:08:50.417 "nvme_io_md": false, 00:08:50.417 "write_zeroes": true, 
00:08:50.417 "zcopy": true, 00:08:50.417 "get_zone_info": false, 00:08:50.417 "zone_management": false, 00:08:50.417 "zone_append": false, 00:08:50.417 "compare": false, 00:08:50.417 "compare_and_write": false, 00:08:50.417 "abort": true, 00:08:50.417 "seek_hole": false, 00:08:50.417 "seek_data": false, 00:08:50.417 "copy": true, 00:08:50.417 "nvme_iov_md": false 00:08:50.417 }, 00:08:50.417 "memory_domains": [ 00:08:50.417 { 00:08:50.417 "dma_device_id": "system", 00:08:50.417 "dma_device_type": 1 00:08:50.417 }, 00:08:50.417 { 00:08:50.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.417 "dma_device_type": 2 00:08:50.417 } 00:08:50.417 ], 00:08:50.417 "driver_specific": {} 00:08:50.417 } 00:08:50.417 ] 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.417 [2024-12-12 19:37:33.219496] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.417 [2024-12-12 19:37:33.219928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.417 [2024-12-12 19:37:33.219998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.417 [2024-12-12 19:37:33.221866] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.417 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.677 19:37:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.677 "name": "Existed_Raid", 00:08:50.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.677 "strip_size_kb": 64, 00:08:50.677 "state": "configuring", 00:08:50.677 "raid_level": "concat", 00:08:50.677 "superblock": false, 00:08:50.677 "num_base_bdevs": 3, 00:08:50.677 "num_base_bdevs_discovered": 2, 00:08:50.677 "num_base_bdevs_operational": 3, 00:08:50.677 "base_bdevs_list": [ 00:08:50.677 { 00:08:50.677 "name": "BaseBdev1", 00:08:50.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.677 "is_configured": false, 00:08:50.677 "data_offset": 0, 00:08:50.677 "data_size": 0 00:08:50.677 }, 00:08:50.677 { 00:08:50.677 "name": "BaseBdev2", 00:08:50.677 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:50.677 "is_configured": true, 00:08:50.677 "data_offset": 0, 00:08:50.677 "data_size": 65536 00:08:50.677 }, 00:08:50.677 { 00:08:50.677 "name": "BaseBdev3", 00:08:50.677 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:50.677 "is_configured": true, 00:08:50.677 "data_offset": 0, 00:08:50.677 "data_size": 65536 00:08:50.677 } 00:08:50.677 ] 00:08:50.677 }' 00:08:50.677 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.677 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.943 [2024-12-12 19:37:33.674766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.943 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.943 "name": "Existed_Raid", 00:08:50.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.943 "strip_size_kb": 64, 00:08:50.943 "state": "configuring", 00:08:50.943 "raid_level": "concat", 00:08:50.943 "superblock": false, 
00:08:50.943 "num_base_bdevs": 3, 00:08:50.943 "num_base_bdevs_discovered": 1, 00:08:50.943 "num_base_bdevs_operational": 3, 00:08:50.943 "base_bdevs_list": [ 00:08:50.943 { 00:08:50.943 "name": "BaseBdev1", 00:08:50.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.944 "is_configured": false, 00:08:50.944 "data_offset": 0, 00:08:50.944 "data_size": 0 00:08:50.944 }, 00:08:50.944 { 00:08:50.944 "name": null, 00:08:50.944 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:50.944 "is_configured": false, 00:08:50.944 "data_offset": 0, 00:08:50.944 "data_size": 65536 00:08:50.944 }, 00:08:50.944 { 00:08:50.944 "name": "BaseBdev3", 00:08:50.944 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:50.944 "is_configured": true, 00:08:50.944 "data_offset": 0, 00:08:50.944 "data_size": 65536 00:08:50.944 } 00:08:50.944 ] 00:08:50.944 }' 00:08:50.944 19:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.944 19:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.526 
19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.526 [2024-12-12 19:37:34.211316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.526 BaseBdev1 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.526 [ 00:08:51.526 { 00:08:51.526 "name": "BaseBdev1", 00:08:51.526 "aliases": [ 00:08:51.526 "49450cee-fa5e-4c55-a001-2925f29b76d4" 00:08:51.526 ], 00:08:51.526 "product_name": 
"Malloc disk", 00:08:51.526 "block_size": 512, 00:08:51.526 "num_blocks": 65536, 00:08:51.526 "uuid": "49450cee-fa5e-4c55-a001-2925f29b76d4", 00:08:51.526 "assigned_rate_limits": { 00:08:51.526 "rw_ios_per_sec": 0, 00:08:51.526 "rw_mbytes_per_sec": 0, 00:08:51.526 "r_mbytes_per_sec": 0, 00:08:51.526 "w_mbytes_per_sec": 0 00:08:51.526 }, 00:08:51.526 "claimed": true, 00:08:51.526 "claim_type": "exclusive_write", 00:08:51.526 "zoned": false, 00:08:51.526 "supported_io_types": { 00:08:51.526 "read": true, 00:08:51.526 "write": true, 00:08:51.526 "unmap": true, 00:08:51.526 "flush": true, 00:08:51.526 "reset": true, 00:08:51.526 "nvme_admin": false, 00:08:51.526 "nvme_io": false, 00:08:51.526 "nvme_io_md": false, 00:08:51.526 "write_zeroes": true, 00:08:51.526 "zcopy": true, 00:08:51.526 "get_zone_info": false, 00:08:51.526 "zone_management": false, 00:08:51.526 "zone_append": false, 00:08:51.526 "compare": false, 00:08:51.526 "compare_and_write": false, 00:08:51.526 "abort": true, 00:08:51.526 "seek_hole": false, 00:08:51.526 "seek_data": false, 00:08:51.526 "copy": true, 00:08:51.526 "nvme_iov_md": false 00:08:51.526 }, 00:08:51.526 "memory_domains": [ 00:08:51.526 { 00:08:51.526 "dma_device_id": "system", 00:08:51.526 "dma_device_type": 1 00:08:51.526 }, 00:08:51.526 { 00:08:51.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.526 "dma_device_type": 2 00:08:51.526 } 00:08:51.526 ], 00:08:51.526 "driver_specific": {} 00:08:51.526 } 00:08:51.526 ] 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.526 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.527 19:37:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.527 "name": "Existed_Raid", 00:08:51.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.527 "strip_size_kb": 64, 00:08:51.527 "state": "configuring", 00:08:51.527 "raid_level": "concat", 00:08:51.527 "superblock": false, 00:08:51.527 "num_base_bdevs": 3, 00:08:51.527 "num_base_bdevs_discovered": 2, 00:08:51.527 "num_base_bdevs_operational": 3, 00:08:51.527 "base_bdevs_list": [ 00:08:51.527 { 00:08:51.527 "name": "BaseBdev1", 
00:08:51.527 "uuid": "49450cee-fa5e-4c55-a001-2925f29b76d4", 00:08:51.527 "is_configured": true, 00:08:51.527 "data_offset": 0, 00:08:51.527 "data_size": 65536 00:08:51.527 }, 00:08:51.527 { 00:08:51.527 "name": null, 00:08:51.527 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:51.527 "is_configured": false, 00:08:51.527 "data_offset": 0, 00:08:51.527 "data_size": 65536 00:08:51.527 }, 00:08:51.527 { 00:08:51.527 "name": "BaseBdev3", 00:08:51.527 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:51.527 "is_configured": true, 00:08:51.527 "data_offset": 0, 00:08:51.527 "data_size": 65536 00:08:51.527 } 00:08:51.527 ] 00:08:51.527 }' 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.527 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.096 [2024-12-12 19:37:34.730491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:52.096 
19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.096 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.096 "name": "Existed_Raid", 00:08:52.096 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:52.096 "strip_size_kb": 64, 00:08:52.096 "state": "configuring", 00:08:52.096 "raid_level": "concat", 00:08:52.096 "superblock": false, 00:08:52.096 "num_base_bdevs": 3, 00:08:52.096 "num_base_bdevs_discovered": 1, 00:08:52.096 "num_base_bdevs_operational": 3, 00:08:52.096 "base_bdevs_list": [ 00:08:52.096 { 00:08:52.096 "name": "BaseBdev1", 00:08:52.096 "uuid": "49450cee-fa5e-4c55-a001-2925f29b76d4", 00:08:52.096 "is_configured": true, 00:08:52.096 "data_offset": 0, 00:08:52.096 "data_size": 65536 00:08:52.096 }, 00:08:52.096 { 00:08:52.096 "name": null, 00:08:52.096 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:52.096 "is_configured": false, 00:08:52.096 "data_offset": 0, 00:08:52.096 "data_size": 65536 00:08:52.096 }, 00:08:52.096 { 00:08:52.096 "name": null, 00:08:52.096 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:52.097 "is_configured": false, 00:08:52.097 "data_offset": 0, 00:08:52.097 "data_size": 65536 00:08:52.097 } 00:08:52.097 ] 00:08:52.097 }' 00:08:52.097 19:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.097 19:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.664 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.664 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.664 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.664 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.664 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.664 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:52.664 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:52.664 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.665 [2024-12-12 19:37:35.241721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.665 "name": "Existed_Raid", 00:08:52.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.665 "strip_size_kb": 64, 00:08:52.665 "state": "configuring", 00:08:52.665 "raid_level": "concat", 00:08:52.665 "superblock": false, 00:08:52.665 "num_base_bdevs": 3, 00:08:52.665 "num_base_bdevs_discovered": 2, 00:08:52.665 "num_base_bdevs_operational": 3, 00:08:52.665 "base_bdevs_list": [ 00:08:52.665 { 00:08:52.665 "name": "BaseBdev1", 00:08:52.665 "uuid": "49450cee-fa5e-4c55-a001-2925f29b76d4", 00:08:52.665 "is_configured": true, 00:08:52.665 "data_offset": 0, 00:08:52.665 "data_size": 65536 00:08:52.665 }, 00:08:52.665 { 00:08:52.665 "name": null, 00:08:52.665 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:52.665 "is_configured": false, 00:08:52.665 "data_offset": 0, 00:08:52.665 "data_size": 65536 00:08:52.665 }, 00:08:52.665 { 00:08:52.665 "name": "BaseBdev3", 00:08:52.665 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:52.665 "is_configured": true, 00:08:52.665 "data_offset": 0, 00:08:52.665 "data_size": 65536 00:08:52.665 } 00:08:52.665 ] 00:08:52.665 }' 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.665 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.924 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.924 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.924 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:52.924 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.924 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.924 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:52.924 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.924 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.924 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.183 [2024-12-12 19:37:35.772834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.183 19:37:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.183 "name": "Existed_Raid", 00:08:53.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.183 "strip_size_kb": 64, 00:08:53.183 "state": "configuring", 00:08:53.183 "raid_level": "concat", 00:08:53.183 "superblock": false, 00:08:53.183 "num_base_bdevs": 3, 00:08:53.183 "num_base_bdevs_discovered": 1, 00:08:53.183 "num_base_bdevs_operational": 3, 00:08:53.183 "base_bdevs_list": [ 00:08:53.183 { 00:08:53.183 "name": null, 00:08:53.183 "uuid": "49450cee-fa5e-4c55-a001-2925f29b76d4", 00:08:53.183 "is_configured": false, 00:08:53.183 "data_offset": 0, 00:08:53.183 "data_size": 65536 00:08:53.183 }, 00:08:53.183 { 00:08:53.183 "name": null, 00:08:53.183 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:53.183 "is_configured": false, 00:08:53.183 "data_offset": 0, 00:08:53.183 "data_size": 65536 00:08:53.183 }, 00:08:53.183 { 00:08:53.183 "name": "BaseBdev3", 00:08:53.183 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:53.183 "is_configured": true, 00:08:53.183 "data_offset": 0, 00:08:53.183 "data_size": 65536 00:08:53.183 } 00:08:53.183 ] 00:08:53.183 }' 00:08:53.183 19:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.183 19:37:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.751 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.752 [2024-12-12 19:37:36.391948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.752 19:37:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.752 "name": "Existed_Raid", 00:08:53.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.752 "strip_size_kb": 64, 00:08:53.752 "state": "configuring", 00:08:53.752 "raid_level": "concat", 00:08:53.752 "superblock": false, 00:08:53.752 "num_base_bdevs": 3, 00:08:53.752 "num_base_bdevs_discovered": 2, 00:08:53.752 "num_base_bdevs_operational": 3, 00:08:53.752 "base_bdevs_list": [ 00:08:53.752 { 00:08:53.752 "name": null, 00:08:53.752 "uuid": "49450cee-fa5e-4c55-a001-2925f29b76d4", 00:08:53.752 "is_configured": false, 00:08:53.752 "data_offset": 0, 00:08:53.752 "data_size": 65536 00:08:53.752 }, 00:08:53.752 { 00:08:53.752 "name": "BaseBdev2", 00:08:53.752 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:53.752 "is_configured": true, 00:08:53.752 "data_offset": 
0, 00:08:53.752 "data_size": 65536 00:08:53.752 }, 00:08:53.752 { 00:08:53.752 "name": "BaseBdev3", 00:08:53.752 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:53.752 "is_configured": true, 00:08:53.752 "data_offset": 0, 00:08:53.752 "data_size": 65536 00:08:53.752 } 00:08:53.752 ] 00:08:53.752 }' 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.752 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.011 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.011 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.011 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.011 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.011 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 49450cee-fa5e-4c55-a001-2925f29b76d4 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.270 [2024-12-12 19:37:36.947244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:54.270 [2024-12-12 19:37:36.947303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:54.270 [2024-12-12 19:37:36.947314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:54.270 [2024-12-12 19:37:36.947622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:54.270 [2024-12-12 19:37:36.947792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:54.270 [2024-12-12 19:37:36.947802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:54.270 [2024-12-12 19:37:36.948129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.270 NewBaseBdev 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.270 
19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.270 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.270 [ 00:08:54.270 { 00:08:54.270 "name": "NewBaseBdev", 00:08:54.270 "aliases": [ 00:08:54.270 "49450cee-fa5e-4c55-a001-2925f29b76d4" 00:08:54.270 ], 00:08:54.270 "product_name": "Malloc disk", 00:08:54.270 "block_size": 512, 00:08:54.270 "num_blocks": 65536, 00:08:54.270 "uuid": "49450cee-fa5e-4c55-a001-2925f29b76d4", 00:08:54.270 "assigned_rate_limits": { 00:08:54.270 "rw_ios_per_sec": 0, 00:08:54.270 "rw_mbytes_per_sec": 0, 00:08:54.270 "r_mbytes_per_sec": 0, 00:08:54.270 "w_mbytes_per_sec": 0 00:08:54.270 }, 00:08:54.271 "claimed": true, 00:08:54.271 "claim_type": "exclusive_write", 00:08:54.271 "zoned": false, 00:08:54.271 "supported_io_types": { 00:08:54.271 "read": true, 00:08:54.271 "write": true, 00:08:54.271 "unmap": true, 00:08:54.271 "flush": true, 00:08:54.271 "reset": true, 00:08:54.271 "nvme_admin": false, 00:08:54.271 "nvme_io": false, 00:08:54.271 "nvme_io_md": false, 00:08:54.271 "write_zeroes": true, 00:08:54.271 "zcopy": true, 00:08:54.271 "get_zone_info": false, 00:08:54.271 "zone_management": false, 00:08:54.271 "zone_append": false, 00:08:54.271 "compare": false, 00:08:54.271 "compare_and_write": false, 00:08:54.271 "abort": true, 00:08:54.271 "seek_hole": false, 00:08:54.271 "seek_data": false, 00:08:54.271 "copy": true, 00:08:54.271 "nvme_iov_md": false 00:08:54.271 }, 00:08:54.271 
"memory_domains": [ 00:08:54.271 { 00:08:54.271 "dma_device_id": "system", 00:08:54.271 "dma_device_type": 1 00:08:54.271 }, 00:08:54.271 { 00:08:54.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.271 "dma_device_type": 2 00:08:54.271 } 00:08:54.271 ], 00:08:54.271 "driver_specific": {} 00:08:54.271 } 00:08:54.271 ] 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.271 19:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.271 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.271 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.271 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.271 19:37:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.271 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.271 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.271 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.271 "name": "Existed_Raid", 00:08:54.271 "uuid": "66efe2c0-ccf8-459f-9c06-4ef9bc18e65c", 00:08:54.271 "strip_size_kb": 64, 00:08:54.271 "state": "online", 00:08:54.271 "raid_level": "concat", 00:08:54.271 "superblock": false, 00:08:54.271 "num_base_bdevs": 3, 00:08:54.271 "num_base_bdevs_discovered": 3, 00:08:54.271 "num_base_bdevs_operational": 3, 00:08:54.271 "base_bdevs_list": [ 00:08:54.271 { 00:08:54.271 "name": "NewBaseBdev", 00:08:54.271 "uuid": "49450cee-fa5e-4c55-a001-2925f29b76d4", 00:08:54.271 "is_configured": true, 00:08:54.271 "data_offset": 0, 00:08:54.271 "data_size": 65536 00:08:54.271 }, 00:08:54.271 { 00:08:54.271 "name": "BaseBdev2", 00:08:54.271 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:54.271 "is_configured": true, 00:08:54.271 "data_offset": 0, 00:08:54.271 "data_size": 65536 00:08:54.271 }, 00:08:54.271 { 00:08:54.271 "name": "BaseBdev3", 00:08:54.271 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:54.271 "is_configured": true, 00:08:54.271 "data_offset": 0, 00:08:54.271 "data_size": 65536 00:08:54.271 } 00:08:54.271 ] 00:08:54.271 }' 00:08:54.271 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.271 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.839 [2024-12-12 19:37:37.430851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.839 "name": "Existed_Raid", 00:08:54.839 "aliases": [ 00:08:54.839 "66efe2c0-ccf8-459f-9c06-4ef9bc18e65c" 00:08:54.839 ], 00:08:54.839 "product_name": "Raid Volume", 00:08:54.839 "block_size": 512, 00:08:54.839 "num_blocks": 196608, 00:08:54.839 "uuid": "66efe2c0-ccf8-459f-9c06-4ef9bc18e65c", 00:08:54.839 "assigned_rate_limits": { 00:08:54.839 "rw_ios_per_sec": 0, 00:08:54.839 "rw_mbytes_per_sec": 0, 00:08:54.839 "r_mbytes_per_sec": 0, 00:08:54.839 "w_mbytes_per_sec": 0 00:08:54.839 }, 00:08:54.839 "claimed": false, 00:08:54.839 "zoned": false, 00:08:54.839 "supported_io_types": { 00:08:54.839 "read": true, 00:08:54.839 "write": true, 00:08:54.839 "unmap": true, 00:08:54.839 "flush": true, 00:08:54.839 "reset": true, 00:08:54.839 "nvme_admin": false, 00:08:54.839 "nvme_io": false, 00:08:54.839 "nvme_io_md": false, 00:08:54.839 
"write_zeroes": true, 00:08:54.839 "zcopy": false, 00:08:54.839 "get_zone_info": false, 00:08:54.839 "zone_management": false, 00:08:54.839 "zone_append": false, 00:08:54.839 "compare": false, 00:08:54.839 "compare_and_write": false, 00:08:54.839 "abort": false, 00:08:54.839 "seek_hole": false, 00:08:54.839 "seek_data": false, 00:08:54.839 "copy": false, 00:08:54.839 "nvme_iov_md": false 00:08:54.839 }, 00:08:54.839 "memory_domains": [ 00:08:54.839 { 00:08:54.839 "dma_device_id": "system", 00:08:54.839 "dma_device_type": 1 00:08:54.839 }, 00:08:54.839 { 00:08:54.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.839 "dma_device_type": 2 00:08:54.839 }, 00:08:54.839 { 00:08:54.839 "dma_device_id": "system", 00:08:54.839 "dma_device_type": 1 00:08:54.839 }, 00:08:54.839 { 00:08:54.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.839 "dma_device_type": 2 00:08:54.839 }, 00:08:54.839 { 00:08:54.839 "dma_device_id": "system", 00:08:54.839 "dma_device_type": 1 00:08:54.839 }, 00:08:54.839 { 00:08:54.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.839 "dma_device_type": 2 00:08:54.839 } 00:08:54.839 ], 00:08:54.839 "driver_specific": { 00:08:54.839 "raid": { 00:08:54.839 "uuid": "66efe2c0-ccf8-459f-9c06-4ef9bc18e65c", 00:08:54.839 "strip_size_kb": 64, 00:08:54.839 "state": "online", 00:08:54.839 "raid_level": "concat", 00:08:54.839 "superblock": false, 00:08:54.839 "num_base_bdevs": 3, 00:08:54.839 "num_base_bdevs_discovered": 3, 00:08:54.839 "num_base_bdevs_operational": 3, 00:08:54.839 "base_bdevs_list": [ 00:08:54.839 { 00:08:54.839 "name": "NewBaseBdev", 00:08:54.839 "uuid": "49450cee-fa5e-4c55-a001-2925f29b76d4", 00:08:54.839 "is_configured": true, 00:08:54.839 "data_offset": 0, 00:08:54.839 "data_size": 65536 00:08:54.839 }, 00:08:54.839 { 00:08:54.839 "name": "BaseBdev2", 00:08:54.839 "uuid": "27395e9a-4845-48d1-9ac7-47f3699fca15", 00:08:54.839 "is_configured": true, 00:08:54.839 "data_offset": 0, 00:08:54.839 "data_size": 65536 00:08:54.839 }, 
00:08:54.839 { 00:08:54.839 "name": "BaseBdev3", 00:08:54.839 "uuid": "001b03e9-adb9-44e1-afe3-00c3dddec45a", 00:08:54.839 "is_configured": true, 00:08:54.839 "data_offset": 0, 00:08:54.839 "data_size": 65536 00:08:54.839 } 00:08:54.839 ] 00:08:54.839 } 00:08:54.839 } 00:08:54.839 }' 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.839 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:54.839 BaseBdev2 00:08:54.840 BaseBdev3' 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.840 19:37:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.840 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.099 
19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.099 [2024-12-12 19:37:37.718013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.099 [2024-12-12 19:37:37.718139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.099 [2024-12-12 19:37:37.718275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.099 [2024-12-12 19:37:37.718360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.099 [2024-12-12 19:37:37.718375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67295 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67295 ']' 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67295 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67295 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67295' 00:08:55.099 killing process with pid 67295 00:08:55.099 19:37:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67295 00:08:55.099 [2024-12-12 19:37:37.767630] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.099 19:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67295 00:08:55.358 [2024-12-12 19:37:38.106791] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:56.737 00:08:56.737 real 0m11.013s 00:08:56.737 user 0m17.518s 00:08:56.737 sys 0m1.763s 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.737 ************************************ 00:08:56.737 END TEST raid_state_function_test 00:08:56.737 ************************************ 00:08:56.737 19:37:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:56.737 19:37:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:56.737 19:37:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.737 19:37:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.737 ************************************ 00:08:56.737 START TEST raid_state_function_test_sb 00:08:56.737 ************************************ 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:56.737 19:37:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:56.737 19:37:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67916 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67916' 00:08:56.737 Process raid pid: 67916 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67916 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67916 ']' 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.737 19:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.737 [2024-12-12 19:37:39.543140] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:56.737 [2024-12-12 19:37:39.543252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.997 [2024-12-12 19:37:39.719177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.997 [2024-12-12 19:37:39.834933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.255 [2024-12-12 19:37:40.041025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.255 [2024-12-12 19:37:40.041150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.822 [2024-12-12 19:37:40.381209] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.822 [2024-12-12 19:37:40.381266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.822 [2024-12-12 
19:37:40.381277] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.822 [2024-12-12 19:37:40.381286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.822 [2024-12-12 19:37:40.381293] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.822 [2024-12-12 19:37:40.381301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.822 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.823 "name": "Existed_Raid", 00:08:57.823 "uuid": "8f9bc73e-c4df-4081-aa9e-2b99e92db4b1", 00:08:57.823 "strip_size_kb": 64, 00:08:57.823 "state": "configuring", 00:08:57.823 "raid_level": "concat", 00:08:57.823 "superblock": true, 00:08:57.823 "num_base_bdevs": 3, 00:08:57.823 "num_base_bdevs_discovered": 0, 00:08:57.823 "num_base_bdevs_operational": 3, 00:08:57.823 "base_bdevs_list": [ 00:08:57.823 { 00:08:57.823 "name": "BaseBdev1", 00:08:57.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.823 "is_configured": false, 00:08:57.823 "data_offset": 0, 00:08:57.823 "data_size": 0 00:08:57.823 }, 00:08:57.823 { 00:08:57.823 "name": "BaseBdev2", 00:08:57.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.823 "is_configured": false, 00:08:57.823 "data_offset": 0, 00:08:57.823 "data_size": 0 00:08:57.823 }, 00:08:57.823 { 00:08:57.823 "name": "BaseBdev3", 00:08:57.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.823 "is_configured": false, 00:08:57.823 "data_offset": 0, 00:08:57.823 "data_size": 0 00:08:57.823 } 00:08:57.823 ] 00:08:57.823 }' 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.823 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.083 [2024-12-12 19:37:40.800424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.083 [2024-12-12 19:37:40.800516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.083 [2024-12-12 19:37:40.812406] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.083 [2024-12-12 19:37:40.812496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.083 [2024-12-12 19:37:40.812567] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.083 [2024-12-12 19:37:40.812601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.083 [2024-12-12 19:37:40.812634] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.083 [2024-12-12 19:37:40.812668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.083 
19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.083 [2024-12-12 19:37:40.855283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.083 BaseBdev1 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.083 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.083 [ 00:08:58.083 { 
00:08:58.083 "name": "BaseBdev1", 00:08:58.083 "aliases": [ 00:08:58.083 "5defcc8d-7680-4bb2-8a58-28bd35d175c6" 00:08:58.083 ], 00:08:58.083 "product_name": "Malloc disk", 00:08:58.083 "block_size": 512, 00:08:58.083 "num_blocks": 65536, 00:08:58.083 "uuid": "5defcc8d-7680-4bb2-8a58-28bd35d175c6", 00:08:58.083 "assigned_rate_limits": { 00:08:58.083 "rw_ios_per_sec": 0, 00:08:58.083 "rw_mbytes_per_sec": 0, 00:08:58.083 "r_mbytes_per_sec": 0, 00:08:58.083 "w_mbytes_per_sec": 0 00:08:58.083 }, 00:08:58.083 "claimed": true, 00:08:58.083 "claim_type": "exclusive_write", 00:08:58.083 "zoned": false, 00:08:58.083 "supported_io_types": { 00:08:58.083 "read": true, 00:08:58.083 "write": true, 00:08:58.083 "unmap": true, 00:08:58.083 "flush": true, 00:08:58.084 "reset": true, 00:08:58.084 "nvme_admin": false, 00:08:58.084 "nvme_io": false, 00:08:58.084 "nvme_io_md": false, 00:08:58.084 "write_zeroes": true, 00:08:58.084 "zcopy": true, 00:08:58.084 "get_zone_info": false, 00:08:58.084 "zone_management": false, 00:08:58.084 "zone_append": false, 00:08:58.084 "compare": false, 00:08:58.084 "compare_and_write": false, 00:08:58.084 "abort": true, 00:08:58.084 "seek_hole": false, 00:08:58.084 "seek_data": false, 00:08:58.084 "copy": true, 00:08:58.084 "nvme_iov_md": false 00:08:58.084 }, 00:08:58.084 "memory_domains": [ 00:08:58.084 { 00:08:58.084 "dma_device_id": "system", 00:08:58.084 "dma_device_type": 1 00:08:58.084 }, 00:08:58.084 { 00:08:58.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.084 "dma_device_type": 2 00:08:58.084 } 00:08:58.084 ], 00:08:58.084 "driver_specific": {} 00:08:58.084 } 00:08:58.084 ] 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.342 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.342 "name": "Existed_Raid", 00:08:58.342 "uuid": "5f04325d-a52b-4dc0-9db1-68fb557ebbba", 00:08:58.342 "strip_size_kb": 64, 00:08:58.342 "state": "configuring", 00:08:58.342 "raid_level": "concat", 00:08:58.342 "superblock": true, 00:08:58.342 
"num_base_bdevs": 3, 00:08:58.342 "num_base_bdevs_discovered": 1, 00:08:58.342 "num_base_bdevs_operational": 3, 00:08:58.342 "base_bdevs_list": [ 00:08:58.342 { 00:08:58.342 "name": "BaseBdev1", 00:08:58.342 "uuid": "5defcc8d-7680-4bb2-8a58-28bd35d175c6", 00:08:58.342 "is_configured": true, 00:08:58.342 "data_offset": 2048, 00:08:58.342 "data_size": 63488 00:08:58.342 }, 00:08:58.342 { 00:08:58.342 "name": "BaseBdev2", 00:08:58.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.342 "is_configured": false, 00:08:58.342 "data_offset": 0, 00:08:58.342 "data_size": 0 00:08:58.342 }, 00:08:58.342 { 00:08:58.342 "name": "BaseBdev3", 00:08:58.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.342 "is_configured": false, 00:08:58.342 "data_offset": 0, 00:08:58.342 "data_size": 0 00:08:58.342 } 00:08:58.342 ] 00:08:58.342 }' 00:08:58.342 19:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.342 19:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.601 [2024-12-12 19:37:41.362490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.601 [2024-12-12 19:37:41.362559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.601 
19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.601 [2024-12-12 19:37:41.374516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.601 [2024-12-12 19:37:41.376312] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.601 [2024-12-12 19:37:41.376415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.601 [2024-12-12 19:37:41.376449] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.601 [2024-12-12 19:37:41.376487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.601 "name": "Existed_Raid", 00:08:58.601 "uuid": "847e3095-a09c-4506-97a5-c8eef1e597c6", 00:08:58.601 "strip_size_kb": 64, 00:08:58.601 "state": "configuring", 00:08:58.601 "raid_level": "concat", 00:08:58.601 "superblock": true, 00:08:58.601 "num_base_bdevs": 3, 00:08:58.601 "num_base_bdevs_discovered": 1, 00:08:58.601 "num_base_bdevs_operational": 3, 00:08:58.601 "base_bdevs_list": [ 00:08:58.601 { 00:08:58.601 "name": "BaseBdev1", 00:08:58.601 "uuid": "5defcc8d-7680-4bb2-8a58-28bd35d175c6", 00:08:58.601 "is_configured": true, 00:08:58.601 "data_offset": 2048, 00:08:58.601 "data_size": 63488 00:08:58.601 }, 00:08:58.601 { 00:08:58.601 "name": "BaseBdev2", 00:08:58.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.601 "is_configured": false, 00:08:58.601 "data_offset": 0, 00:08:58.601 "data_size": 0 00:08:58.601 }, 00:08:58.601 { 00:08:58.601 "name": "BaseBdev3", 00:08:58.601 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:58.601 "is_configured": false, 00:08:58.601 "data_offset": 0, 00:08:58.601 "data_size": 0 00:08:58.601 } 00:08:58.601 ] 00:08:58.601 }' 00:08:58.601 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.602 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.170 [2024-12-12 19:37:41.890498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.170 BaseBdev2 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.170 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.170 [ 00:08:59.170 { 00:08:59.170 "name": "BaseBdev2", 00:08:59.170 "aliases": [ 00:08:59.170 "d421f76f-5e4c-4328-9618-016fdc7456cf" 00:08:59.170 ], 00:08:59.170 "product_name": "Malloc disk", 00:08:59.170 "block_size": 512, 00:08:59.170 "num_blocks": 65536, 00:08:59.170 "uuid": "d421f76f-5e4c-4328-9618-016fdc7456cf", 00:08:59.170 "assigned_rate_limits": { 00:08:59.170 "rw_ios_per_sec": 0, 00:08:59.170 "rw_mbytes_per_sec": 0, 00:08:59.170 "r_mbytes_per_sec": 0, 00:08:59.170 "w_mbytes_per_sec": 0 00:08:59.170 }, 00:08:59.170 "claimed": true, 00:08:59.170 "claim_type": "exclusive_write", 00:08:59.170 "zoned": false, 00:08:59.170 "supported_io_types": { 00:08:59.170 "read": true, 00:08:59.170 "write": true, 00:08:59.170 "unmap": true, 00:08:59.170 "flush": true, 00:08:59.170 "reset": true, 00:08:59.170 "nvme_admin": false, 00:08:59.170 "nvme_io": false, 00:08:59.170 "nvme_io_md": false, 00:08:59.170 "write_zeroes": true, 00:08:59.170 "zcopy": true, 00:08:59.170 "get_zone_info": false, 00:08:59.170 "zone_management": false, 00:08:59.171 "zone_append": false, 00:08:59.171 "compare": false, 00:08:59.171 "compare_and_write": false, 00:08:59.171 "abort": true, 00:08:59.171 "seek_hole": false, 00:08:59.171 "seek_data": false, 00:08:59.171 "copy": true, 00:08:59.171 "nvme_iov_md": false 00:08:59.171 }, 00:08:59.171 "memory_domains": [ 00:08:59.171 { 00:08:59.171 "dma_device_id": "system", 00:08:59.171 "dma_device_type": 1 00:08:59.171 }, 00:08:59.171 { 00:08:59.171 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.171 "dma_device_type": 2 00:08:59.171 } 00:08:59.171 ], 00:08:59.171 "driver_specific": {} 00:08:59.171 } 00:08:59.171 ] 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.171 "name": "Existed_Raid", 00:08:59.171 "uuid": "847e3095-a09c-4506-97a5-c8eef1e597c6", 00:08:59.171 "strip_size_kb": 64, 00:08:59.171 "state": "configuring", 00:08:59.171 "raid_level": "concat", 00:08:59.171 "superblock": true, 00:08:59.171 "num_base_bdevs": 3, 00:08:59.171 "num_base_bdevs_discovered": 2, 00:08:59.171 "num_base_bdevs_operational": 3, 00:08:59.171 "base_bdevs_list": [ 00:08:59.171 { 00:08:59.171 "name": "BaseBdev1", 00:08:59.171 "uuid": "5defcc8d-7680-4bb2-8a58-28bd35d175c6", 00:08:59.171 "is_configured": true, 00:08:59.171 "data_offset": 2048, 00:08:59.171 "data_size": 63488 00:08:59.171 }, 00:08:59.171 { 00:08:59.171 "name": "BaseBdev2", 00:08:59.171 "uuid": "d421f76f-5e4c-4328-9618-016fdc7456cf", 00:08:59.171 "is_configured": true, 00:08:59.171 "data_offset": 2048, 00:08:59.171 "data_size": 63488 00:08:59.171 }, 00:08:59.171 { 00:08:59.171 "name": "BaseBdev3", 00:08:59.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.171 "is_configured": false, 00:08:59.171 "data_offset": 0, 00:08:59.171 "data_size": 0 00:08:59.171 } 00:08:59.171 ] 00:08:59.171 }' 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.171 19:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.740 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.740 19:37:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.740 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.740 [2024-12-12 19:37:42.419150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.740 [2024-12-12 19:37:42.419407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.740 [2024-12-12 19:37:42.419428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.740 [2024-12-12 19:37:42.419795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:59.740 BaseBdev3 00:08:59.740 [2024-12-12 19:37:42.419998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.740 [2024-12-12 19:37:42.420017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:59.740 [2024-12-12 19:37:42.420163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.740 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.740 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.741 [ 00:08:59.741 { 00:08:59.741 "name": "BaseBdev3", 00:08:59.741 "aliases": [ 00:08:59.741 "f0b03739-d033-464a-8063-129927636803" 00:08:59.741 ], 00:08:59.741 "product_name": "Malloc disk", 00:08:59.741 "block_size": 512, 00:08:59.741 "num_blocks": 65536, 00:08:59.741 "uuid": "f0b03739-d033-464a-8063-129927636803", 00:08:59.741 "assigned_rate_limits": { 00:08:59.741 "rw_ios_per_sec": 0, 00:08:59.741 "rw_mbytes_per_sec": 0, 00:08:59.741 "r_mbytes_per_sec": 0, 00:08:59.741 "w_mbytes_per_sec": 0 00:08:59.741 }, 00:08:59.741 "claimed": true, 00:08:59.741 "claim_type": "exclusive_write", 00:08:59.741 "zoned": false, 00:08:59.741 "supported_io_types": { 00:08:59.741 "read": true, 00:08:59.741 "write": true, 00:08:59.741 "unmap": true, 00:08:59.741 "flush": true, 00:08:59.741 "reset": true, 00:08:59.741 "nvme_admin": false, 00:08:59.741 "nvme_io": false, 00:08:59.741 "nvme_io_md": false, 00:08:59.741 "write_zeroes": true, 00:08:59.741 "zcopy": true, 00:08:59.741 "get_zone_info": false, 00:08:59.741 "zone_management": false, 00:08:59.741 "zone_append": false, 00:08:59.741 "compare": false, 00:08:59.741 "compare_and_write": false, 00:08:59.741 "abort": true, 00:08:59.741 "seek_hole": false, 00:08:59.741 "seek_data": false, 
00:08:59.741 "copy": true, 00:08:59.741 "nvme_iov_md": false 00:08:59.741 }, 00:08:59.741 "memory_domains": [ 00:08:59.741 { 00:08:59.741 "dma_device_id": "system", 00:08:59.741 "dma_device_type": 1 00:08:59.741 }, 00:08:59.741 { 00:08:59.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.741 "dma_device_type": 2 00:08:59.741 } 00:08:59.741 ], 00:08:59.741 "driver_specific": {} 00:08:59.741 } 00:08:59.741 ] 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.741 "name": "Existed_Raid", 00:08:59.741 "uuid": "847e3095-a09c-4506-97a5-c8eef1e597c6", 00:08:59.741 "strip_size_kb": 64, 00:08:59.741 "state": "online", 00:08:59.741 "raid_level": "concat", 00:08:59.741 "superblock": true, 00:08:59.741 "num_base_bdevs": 3, 00:08:59.741 "num_base_bdevs_discovered": 3, 00:08:59.741 "num_base_bdevs_operational": 3, 00:08:59.741 "base_bdevs_list": [ 00:08:59.741 { 00:08:59.741 "name": "BaseBdev1", 00:08:59.741 "uuid": "5defcc8d-7680-4bb2-8a58-28bd35d175c6", 00:08:59.741 "is_configured": true, 00:08:59.741 "data_offset": 2048, 00:08:59.741 "data_size": 63488 00:08:59.741 }, 00:08:59.741 { 00:08:59.741 "name": "BaseBdev2", 00:08:59.741 "uuid": "d421f76f-5e4c-4328-9618-016fdc7456cf", 00:08:59.741 "is_configured": true, 00:08:59.741 "data_offset": 2048, 00:08:59.741 "data_size": 63488 00:08:59.741 }, 00:08:59.741 { 00:08:59.741 "name": "BaseBdev3", 00:08:59.741 "uuid": "f0b03739-d033-464a-8063-129927636803", 00:08:59.741 "is_configured": true, 00:08:59.741 "data_offset": 2048, 00:08:59.741 "data_size": 63488 00:08:59.741 } 00:08:59.741 ] 00:08:59.741 }' 00:08:59.741 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.741 19:37:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.311 [2024-12-12 19:37:42.926671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.311 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.311 "name": "Existed_Raid", 00:09:00.311 "aliases": [ 00:09:00.311 "847e3095-a09c-4506-97a5-c8eef1e597c6" 00:09:00.311 ], 00:09:00.311 "product_name": "Raid Volume", 00:09:00.311 "block_size": 512, 00:09:00.311 "num_blocks": 190464, 00:09:00.311 "uuid": "847e3095-a09c-4506-97a5-c8eef1e597c6", 00:09:00.311 "assigned_rate_limits": { 00:09:00.311 "rw_ios_per_sec": 0, 00:09:00.311 "rw_mbytes_per_sec": 0, 00:09:00.311 
"r_mbytes_per_sec": 0, 00:09:00.311 "w_mbytes_per_sec": 0 00:09:00.311 }, 00:09:00.311 "claimed": false, 00:09:00.311 "zoned": false, 00:09:00.311 "supported_io_types": { 00:09:00.311 "read": true, 00:09:00.311 "write": true, 00:09:00.311 "unmap": true, 00:09:00.311 "flush": true, 00:09:00.311 "reset": true, 00:09:00.311 "nvme_admin": false, 00:09:00.311 "nvme_io": false, 00:09:00.311 "nvme_io_md": false, 00:09:00.311 "write_zeroes": true, 00:09:00.311 "zcopy": false, 00:09:00.311 "get_zone_info": false, 00:09:00.311 "zone_management": false, 00:09:00.311 "zone_append": false, 00:09:00.311 "compare": false, 00:09:00.311 "compare_and_write": false, 00:09:00.311 "abort": false, 00:09:00.311 "seek_hole": false, 00:09:00.311 "seek_data": false, 00:09:00.311 "copy": false, 00:09:00.311 "nvme_iov_md": false 00:09:00.311 }, 00:09:00.311 "memory_domains": [ 00:09:00.311 { 00:09:00.311 "dma_device_id": "system", 00:09:00.311 "dma_device_type": 1 00:09:00.311 }, 00:09:00.311 { 00:09:00.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.311 "dma_device_type": 2 00:09:00.311 }, 00:09:00.311 { 00:09:00.311 "dma_device_id": "system", 00:09:00.311 "dma_device_type": 1 00:09:00.311 }, 00:09:00.311 { 00:09:00.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.311 "dma_device_type": 2 00:09:00.311 }, 00:09:00.311 { 00:09:00.311 "dma_device_id": "system", 00:09:00.311 "dma_device_type": 1 00:09:00.311 }, 00:09:00.311 { 00:09:00.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.311 "dma_device_type": 2 00:09:00.311 } 00:09:00.311 ], 00:09:00.311 "driver_specific": { 00:09:00.311 "raid": { 00:09:00.311 "uuid": "847e3095-a09c-4506-97a5-c8eef1e597c6", 00:09:00.311 "strip_size_kb": 64, 00:09:00.311 "state": "online", 00:09:00.311 "raid_level": "concat", 00:09:00.311 "superblock": true, 00:09:00.311 "num_base_bdevs": 3, 00:09:00.311 "num_base_bdevs_discovered": 3, 00:09:00.311 "num_base_bdevs_operational": 3, 00:09:00.312 "base_bdevs_list": [ 00:09:00.312 { 00:09:00.312 
"name": "BaseBdev1", 00:09:00.312 "uuid": "5defcc8d-7680-4bb2-8a58-28bd35d175c6", 00:09:00.312 "is_configured": true, 00:09:00.312 "data_offset": 2048, 00:09:00.312 "data_size": 63488 00:09:00.312 }, 00:09:00.312 { 00:09:00.312 "name": "BaseBdev2", 00:09:00.312 "uuid": "d421f76f-5e4c-4328-9618-016fdc7456cf", 00:09:00.312 "is_configured": true, 00:09:00.312 "data_offset": 2048, 00:09:00.312 "data_size": 63488 00:09:00.312 }, 00:09:00.312 { 00:09:00.312 "name": "BaseBdev3", 00:09:00.312 "uuid": "f0b03739-d033-464a-8063-129927636803", 00:09:00.312 "is_configured": true, 00:09:00.312 "data_offset": 2048, 00:09:00.312 "data_size": 63488 00:09:00.312 } 00:09:00.312 ] 00:09:00.312 } 00:09:00.312 } 00:09:00.312 }' 00:09:00.312 19:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:00.312 BaseBdev2 00:09:00.312 BaseBdev3' 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.312 19:37:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.312 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.574 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.575 [2024-12-12 19:37:43.221859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.575 [2024-12-12 19:37:43.221933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.575 [2024-12-12 19:37:43.221997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.575 "name": "Existed_Raid", 00:09:00.575 "uuid": "847e3095-a09c-4506-97a5-c8eef1e597c6", 00:09:00.575 "strip_size_kb": 64, 00:09:00.575 "state": "offline", 00:09:00.575 "raid_level": "concat", 00:09:00.575 "superblock": true, 00:09:00.575 "num_base_bdevs": 3, 00:09:00.575 "num_base_bdevs_discovered": 2, 00:09:00.575 "num_base_bdevs_operational": 2, 00:09:00.575 "base_bdevs_list": [ 00:09:00.575 { 00:09:00.575 "name": null, 00:09:00.575 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:00.575 "is_configured": false, 00:09:00.575 "data_offset": 0, 00:09:00.575 "data_size": 63488 00:09:00.575 }, 00:09:00.575 { 00:09:00.575 "name": "BaseBdev2", 00:09:00.575 "uuid": "d421f76f-5e4c-4328-9618-016fdc7456cf", 00:09:00.575 "is_configured": true, 00:09:00.575 "data_offset": 2048, 00:09:00.575 "data_size": 63488 00:09:00.575 }, 00:09:00.575 { 00:09:00.575 "name": "BaseBdev3", 00:09:00.575 "uuid": "f0b03739-d033-464a-8063-129927636803", 00:09:00.575 "is_configured": true, 00:09:00.575 "data_offset": 2048, 00:09:00.575 "data_size": 63488 00:09:00.575 } 00:09:00.575 ] 00:09:00.575 }' 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.575 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.145 [2024-12-12 19:37:43.817868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.145 19:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.145 [2024-12-12 19:37:43.970748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.145 [2024-12-12 19:37:43.970881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.405 BaseBdev2 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.405 
19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.405 [ 00:09:01.405 { 00:09:01.405 "name": "BaseBdev2", 00:09:01.405 "aliases": [ 00:09:01.405 "412368b2-93ad-4c67-a88c-4c983b975b44" 00:09:01.405 ], 00:09:01.405 "product_name": "Malloc disk", 00:09:01.405 "block_size": 512, 00:09:01.405 "num_blocks": 65536, 00:09:01.405 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:01.405 "assigned_rate_limits": { 00:09:01.405 "rw_ios_per_sec": 0, 00:09:01.405 "rw_mbytes_per_sec": 0, 00:09:01.405 "r_mbytes_per_sec": 0, 00:09:01.405 "w_mbytes_per_sec": 0 
00:09:01.405 }, 00:09:01.405 "claimed": false, 00:09:01.405 "zoned": false, 00:09:01.405 "supported_io_types": { 00:09:01.405 "read": true, 00:09:01.405 "write": true, 00:09:01.405 "unmap": true, 00:09:01.405 "flush": true, 00:09:01.405 "reset": true, 00:09:01.405 "nvme_admin": false, 00:09:01.405 "nvme_io": false, 00:09:01.405 "nvme_io_md": false, 00:09:01.405 "write_zeroes": true, 00:09:01.405 "zcopy": true, 00:09:01.405 "get_zone_info": false, 00:09:01.405 "zone_management": false, 00:09:01.405 "zone_append": false, 00:09:01.405 "compare": false, 00:09:01.405 "compare_and_write": false, 00:09:01.405 "abort": true, 00:09:01.405 "seek_hole": false, 00:09:01.405 "seek_data": false, 00:09:01.405 "copy": true, 00:09:01.405 "nvme_iov_md": false 00:09:01.405 }, 00:09:01.405 "memory_domains": [ 00:09:01.405 { 00:09:01.405 "dma_device_id": "system", 00:09:01.405 "dma_device_type": 1 00:09:01.405 }, 00:09:01.405 { 00:09:01.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.405 "dma_device_type": 2 00:09:01.405 } 00:09:01.405 ], 00:09:01.405 "driver_specific": {} 00:09:01.405 } 00:09:01.405 ] 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.405 BaseBdev3 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.405 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.665 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.665 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.665 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.665 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.665 [ 00:09:01.665 { 00:09:01.665 "name": "BaseBdev3", 00:09:01.665 "aliases": [ 00:09:01.665 "59773fa5-dde1-46c7-96e0-de595a800039" 00:09:01.665 ], 00:09:01.665 "product_name": "Malloc disk", 00:09:01.665 "block_size": 512, 00:09:01.665 "num_blocks": 65536, 00:09:01.665 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:01.665 "assigned_rate_limits": { 00:09:01.665 "rw_ios_per_sec": 0, 00:09:01.665 "rw_mbytes_per_sec": 0, 
00:09:01.665 "r_mbytes_per_sec": 0, 00:09:01.666 "w_mbytes_per_sec": 0 00:09:01.666 }, 00:09:01.666 "claimed": false, 00:09:01.666 "zoned": false, 00:09:01.666 "supported_io_types": { 00:09:01.666 "read": true, 00:09:01.666 "write": true, 00:09:01.666 "unmap": true, 00:09:01.666 "flush": true, 00:09:01.666 "reset": true, 00:09:01.666 "nvme_admin": false, 00:09:01.666 "nvme_io": false, 00:09:01.666 "nvme_io_md": false, 00:09:01.666 "write_zeroes": true, 00:09:01.666 "zcopy": true, 00:09:01.666 "get_zone_info": false, 00:09:01.666 "zone_management": false, 00:09:01.666 "zone_append": false, 00:09:01.666 "compare": false, 00:09:01.666 "compare_and_write": false, 00:09:01.666 "abort": true, 00:09:01.666 "seek_hole": false, 00:09:01.666 "seek_data": false, 00:09:01.666 "copy": true, 00:09:01.666 "nvme_iov_md": false 00:09:01.666 }, 00:09:01.666 "memory_domains": [ 00:09:01.666 { 00:09:01.666 "dma_device_id": "system", 00:09:01.666 "dma_device_type": 1 00:09:01.666 }, 00:09:01.666 { 00:09:01.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.666 "dma_device_type": 2 00:09:01.666 } 00:09:01.666 ], 00:09:01.666 "driver_specific": {} 00:09:01.666 } 00:09:01.666 ] 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.666 [2024-12-12 19:37:44.291418] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.666 [2024-12-12 19:37:44.291517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.666 [2024-12-12 19:37:44.291601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.666 [2024-12-12 19:37:44.293585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.666 19:37:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.666 "name": "Existed_Raid", 00:09:01.666 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:01.666 "strip_size_kb": 64, 00:09:01.666 "state": "configuring", 00:09:01.666 "raid_level": "concat", 00:09:01.666 "superblock": true, 00:09:01.666 "num_base_bdevs": 3, 00:09:01.666 "num_base_bdevs_discovered": 2, 00:09:01.666 "num_base_bdevs_operational": 3, 00:09:01.666 "base_bdevs_list": [ 00:09:01.666 { 00:09:01.666 "name": "BaseBdev1", 00:09:01.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.666 "is_configured": false, 00:09:01.666 "data_offset": 0, 00:09:01.666 "data_size": 0 00:09:01.666 }, 00:09:01.666 { 00:09:01.666 "name": "BaseBdev2", 00:09:01.666 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:01.666 "is_configured": true, 00:09:01.666 "data_offset": 2048, 00:09:01.666 "data_size": 63488 00:09:01.666 }, 00:09:01.666 { 00:09:01.666 "name": "BaseBdev3", 00:09:01.666 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:01.666 "is_configured": true, 00:09:01.666 "data_offset": 2048, 00:09:01.666 "data_size": 63488 00:09:01.666 } 00:09:01.666 ] 00:09:01.666 }' 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.666 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.926 [2024-12-12 19:37:44.694731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.926 "name": "Existed_Raid", 00:09:01.926 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:01.926 "strip_size_kb": 64, 00:09:01.926 "state": "configuring", 00:09:01.926 "raid_level": "concat", 00:09:01.926 "superblock": true, 00:09:01.926 "num_base_bdevs": 3, 00:09:01.926 "num_base_bdevs_discovered": 1, 00:09:01.926 "num_base_bdevs_operational": 3, 00:09:01.926 "base_bdevs_list": [ 00:09:01.926 { 00:09:01.926 "name": "BaseBdev1", 00:09:01.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.926 "is_configured": false, 00:09:01.926 "data_offset": 0, 00:09:01.926 "data_size": 0 00:09:01.926 }, 00:09:01.926 { 00:09:01.926 "name": null, 00:09:01.926 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:01.926 "is_configured": false, 00:09:01.926 "data_offset": 0, 00:09:01.926 "data_size": 63488 00:09:01.926 }, 00:09:01.926 { 00:09:01.926 "name": "BaseBdev3", 00:09:01.926 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:01.926 "is_configured": true, 00:09:01.926 "data_offset": 2048, 00:09:01.926 "data_size": 63488 00:09:01.926 } 00:09:01.926 ] 00:09:01.926 }' 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.926 19:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.495 [2024-12-12 19:37:45.251815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.495 BaseBdev1 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.495 19:37:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.495 [ 00:09:02.495 { 00:09:02.495 "name": "BaseBdev1", 00:09:02.495 "aliases": [ 00:09:02.495 "200670a2-8a67-48e9-b9d8-a025b14bfa9b" 00:09:02.495 ], 00:09:02.495 "product_name": "Malloc disk", 00:09:02.495 "block_size": 512, 00:09:02.495 "num_blocks": 65536, 00:09:02.495 "uuid": "200670a2-8a67-48e9-b9d8-a025b14bfa9b", 00:09:02.495 "assigned_rate_limits": { 00:09:02.495 "rw_ios_per_sec": 0, 00:09:02.495 "rw_mbytes_per_sec": 0, 00:09:02.495 "r_mbytes_per_sec": 0, 00:09:02.495 "w_mbytes_per_sec": 0 00:09:02.495 }, 00:09:02.495 "claimed": true, 00:09:02.495 "claim_type": "exclusive_write", 00:09:02.495 "zoned": false, 00:09:02.495 "supported_io_types": { 00:09:02.495 "read": true, 00:09:02.495 "write": true, 00:09:02.495 "unmap": true, 00:09:02.495 "flush": true, 00:09:02.495 "reset": true, 00:09:02.495 "nvme_admin": false, 00:09:02.495 "nvme_io": false, 00:09:02.495 "nvme_io_md": false, 00:09:02.495 "write_zeroes": true, 00:09:02.495 "zcopy": true, 00:09:02.495 "get_zone_info": false, 00:09:02.495 "zone_management": false, 00:09:02.495 "zone_append": false, 00:09:02.495 "compare": false, 00:09:02.495 "compare_and_write": false, 00:09:02.495 "abort": true, 00:09:02.495 "seek_hole": false, 00:09:02.495 "seek_data": false, 00:09:02.495 "copy": true, 00:09:02.495 "nvme_iov_md": false 00:09:02.495 }, 00:09:02.495 "memory_domains": [ 00:09:02.495 { 00:09:02.495 "dma_device_id": "system", 00:09:02.495 "dma_device_type": 1 00:09:02.495 }, 00:09:02.495 { 00:09:02.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.495 
"dma_device_type": 2 00:09:02.495 } 00:09:02.495 ], 00:09:02.495 "driver_specific": {} 00:09:02.495 } 00:09:02.495 ] 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.495 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:02.496 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.755 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.755 "name": "Existed_Raid", 00:09:02.755 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:02.755 "strip_size_kb": 64, 00:09:02.755 "state": "configuring", 00:09:02.755 "raid_level": "concat", 00:09:02.755 "superblock": true, 00:09:02.755 "num_base_bdevs": 3, 00:09:02.755 "num_base_bdevs_discovered": 2, 00:09:02.755 "num_base_bdevs_operational": 3, 00:09:02.755 "base_bdevs_list": [ 00:09:02.755 { 00:09:02.755 "name": "BaseBdev1", 00:09:02.755 "uuid": "200670a2-8a67-48e9-b9d8-a025b14bfa9b", 00:09:02.755 "is_configured": true, 00:09:02.755 "data_offset": 2048, 00:09:02.755 "data_size": 63488 00:09:02.755 }, 00:09:02.755 { 00:09:02.755 "name": null, 00:09:02.755 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:02.755 "is_configured": false, 00:09:02.755 "data_offset": 0, 00:09:02.755 "data_size": 63488 00:09:02.755 }, 00:09:02.755 { 00:09:02.755 "name": "BaseBdev3", 00:09:02.755 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:02.755 "is_configured": true, 00:09:02.755 "data_offset": 2048, 00:09:02.755 "data_size": 63488 00:09:02.755 } 00:09:02.755 ] 00:09:02.755 }' 00:09:02.755 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.755 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.014 [2024-12-12 19:37:45.826881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.014 
19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.014 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.273 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.274 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.274 "name": "Existed_Raid", 00:09:03.274 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:03.274 "strip_size_kb": 64, 00:09:03.274 "state": "configuring", 00:09:03.274 "raid_level": "concat", 00:09:03.274 "superblock": true, 00:09:03.274 "num_base_bdevs": 3, 00:09:03.274 "num_base_bdevs_discovered": 1, 00:09:03.274 "num_base_bdevs_operational": 3, 00:09:03.274 "base_bdevs_list": [ 00:09:03.274 { 00:09:03.274 "name": "BaseBdev1", 00:09:03.274 "uuid": "200670a2-8a67-48e9-b9d8-a025b14bfa9b", 00:09:03.274 "is_configured": true, 00:09:03.274 "data_offset": 2048, 00:09:03.274 "data_size": 63488 00:09:03.274 }, 00:09:03.274 { 00:09:03.274 "name": null, 00:09:03.274 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:03.274 "is_configured": false, 00:09:03.274 "data_offset": 0, 00:09:03.274 "data_size": 63488 00:09:03.274 }, 00:09:03.274 { 00:09:03.274 "name": null, 00:09:03.274 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:03.274 "is_configured": false, 00:09:03.274 "data_offset": 0, 00:09:03.274 "data_size": 63488 00:09:03.274 } 00:09:03.274 ] 00:09:03.274 }' 00:09:03.274 19:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.274 19:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.533 
19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.533 [2024-12-12 19:37:46.318078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.533 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.792 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.792 "name": "Existed_Raid", 00:09:03.792 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:03.792 "strip_size_kb": 64, 00:09:03.792 "state": "configuring", 00:09:03.792 "raid_level": "concat", 00:09:03.792 "superblock": true, 00:09:03.792 "num_base_bdevs": 3, 00:09:03.792 "num_base_bdevs_discovered": 2, 00:09:03.792 "num_base_bdevs_operational": 3, 00:09:03.792 "base_bdevs_list": [ 00:09:03.792 { 00:09:03.792 "name": "BaseBdev1", 00:09:03.792 "uuid": "200670a2-8a67-48e9-b9d8-a025b14bfa9b", 00:09:03.792 "is_configured": true, 00:09:03.792 "data_offset": 2048, 00:09:03.792 "data_size": 63488 00:09:03.792 }, 00:09:03.792 { 00:09:03.792 "name": null, 00:09:03.792 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:03.792 "is_configured": false, 00:09:03.792 "data_offset": 0, 00:09:03.792 "data_size": 
63488 00:09:03.792 }, 00:09:03.792 { 00:09:03.792 "name": "BaseBdev3", 00:09:03.792 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:03.792 "is_configured": true, 00:09:03.792 "data_offset": 2048, 00:09:03.792 "data_size": 63488 00:09:03.792 } 00:09:03.792 ] 00:09:03.792 }' 00:09:03.792 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.792 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 [2024-12-12 19:37:46.789291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.051 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.310 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.310 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.310 "name": "Existed_Raid", 00:09:04.310 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:04.310 "strip_size_kb": 64, 00:09:04.310 "state": "configuring", 00:09:04.310 "raid_level": "concat", 00:09:04.310 "superblock": true, 00:09:04.310 "num_base_bdevs": 3, 00:09:04.310 "num_base_bdevs_discovered": 1, 00:09:04.310 "num_base_bdevs_operational": 
3, 00:09:04.310 "base_bdevs_list": [ 00:09:04.310 { 00:09:04.310 "name": null, 00:09:04.310 "uuid": "200670a2-8a67-48e9-b9d8-a025b14bfa9b", 00:09:04.310 "is_configured": false, 00:09:04.310 "data_offset": 0, 00:09:04.310 "data_size": 63488 00:09:04.310 }, 00:09:04.310 { 00:09:04.310 "name": null, 00:09:04.310 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:04.310 "is_configured": false, 00:09:04.310 "data_offset": 0, 00:09:04.310 "data_size": 63488 00:09:04.311 }, 00:09:04.311 { 00:09:04.311 "name": "BaseBdev3", 00:09:04.311 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:04.311 "is_configured": true, 00:09:04.311 "data_offset": 2048, 00:09:04.311 "data_size": 63488 00:09:04.311 } 00:09:04.311 ] 00:09:04.311 }' 00:09:04.311 19:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.311 19:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.570 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.570 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.570 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.570 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.570 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.570 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:04.570 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:04.570 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.570 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:04.570 [2024-12-12 19:37:47.409502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.829 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.830 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.830 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.830 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.830 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.830 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:04.830 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.830 "name": "Existed_Raid", 00:09:04.830 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:04.830 "strip_size_kb": 64, 00:09:04.830 "state": "configuring", 00:09:04.830 "raid_level": "concat", 00:09:04.830 "superblock": true, 00:09:04.830 "num_base_bdevs": 3, 00:09:04.830 "num_base_bdevs_discovered": 2, 00:09:04.830 "num_base_bdevs_operational": 3, 00:09:04.830 "base_bdevs_list": [ 00:09:04.830 { 00:09:04.830 "name": null, 00:09:04.830 "uuid": "200670a2-8a67-48e9-b9d8-a025b14bfa9b", 00:09:04.830 "is_configured": false, 00:09:04.830 "data_offset": 0, 00:09:04.830 "data_size": 63488 00:09:04.830 }, 00:09:04.830 { 00:09:04.830 "name": "BaseBdev2", 00:09:04.830 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:04.830 "is_configured": true, 00:09:04.830 "data_offset": 2048, 00:09:04.830 "data_size": 63488 00:09:04.830 }, 00:09:04.830 { 00:09:04.830 "name": "BaseBdev3", 00:09:04.830 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:04.830 "is_configured": true, 00:09:04.830 "data_offset": 2048, 00:09:04.830 "data_size": 63488 00:09:04.830 } 00:09:04.830 ] 00:09:04.830 }' 00:09:04.830 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.830 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.089 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 200670a2-8a67-48e9-b9d8-a025b14bfa9b 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.090 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.349 [2024-12-12 19:37:47.950571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:05.349 [2024-12-12 19:37:47.950905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:05.349 [2024-12-12 19:37:47.950927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.349 [2024-12-12 19:37:47.951166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:05.349 [2024-12-12 19:37:47.951302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:05.349 [2024-12-12 19:37:47.951312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:05.349 [2024-12-12 19:37:47.951446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:05.349 NewBaseBdev 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.349 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.349 [ 00:09:05.349 { 00:09:05.349 "name": "NewBaseBdev", 00:09:05.349 "aliases": [ 00:09:05.349 "200670a2-8a67-48e9-b9d8-a025b14bfa9b" 00:09:05.349 ], 00:09:05.349 "product_name": "Malloc disk", 00:09:05.349 "block_size": 512, 00:09:05.349 "num_blocks": 65536, 00:09:05.349 "uuid": "200670a2-8a67-48e9-b9d8-a025b14bfa9b", 
00:09:05.349 "assigned_rate_limits": { 00:09:05.349 "rw_ios_per_sec": 0, 00:09:05.349 "rw_mbytes_per_sec": 0, 00:09:05.349 "r_mbytes_per_sec": 0, 00:09:05.349 "w_mbytes_per_sec": 0 00:09:05.349 }, 00:09:05.349 "claimed": true, 00:09:05.349 "claim_type": "exclusive_write", 00:09:05.349 "zoned": false, 00:09:05.349 "supported_io_types": { 00:09:05.349 "read": true, 00:09:05.349 "write": true, 00:09:05.349 "unmap": true, 00:09:05.349 "flush": true, 00:09:05.349 "reset": true, 00:09:05.349 "nvme_admin": false, 00:09:05.349 "nvme_io": false, 00:09:05.349 "nvme_io_md": false, 00:09:05.349 "write_zeroes": true, 00:09:05.349 "zcopy": true, 00:09:05.349 "get_zone_info": false, 00:09:05.349 "zone_management": false, 00:09:05.349 "zone_append": false, 00:09:05.349 "compare": false, 00:09:05.349 "compare_and_write": false, 00:09:05.349 "abort": true, 00:09:05.349 "seek_hole": false, 00:09:05.349 "seek_data": false, 00:09:05.349 "copy": true, 00:09:05.350 "nvme_iov_md": false 00:09:05.350 }, 00:09:05.350 "memory_domains": [ 00:09:05.350 { 00:09:05.350 "dma_device_id": "system", 00:09:05.350 "dma_device_type": 1 00:09:05.350 }, 00:09:05.350 { 00:09:05.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.350 "dma_device_type": 2 00:09:05.350 } 00:09:05.350 ], 00:09:05.350 "driver_specific": {} 00:09:05.350 } 00:09:05.350 ] 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.350 19:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.350 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.350 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.350 "name": "Existed_Raid", 00:09:05.350 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:05.350 "strip_size_kb": 64, 00:09:05.350 "state": "online", 00:09:05.350 "raid_level": "concat", 00:09:05.350 "superblock": true, 00:09:05.350 "num_base_bdevs": 3, 00:09:05.350 "num_base_bdevs_discovered": 3, 00:09:05.350 "num_base_bdevs_operational": 3, 00:09:05.350 "base_bdevs_list": [ 00:09:05.350 { 00:09:05.350 "name": "NewBaseBdev", 00:09:05.350 "uuid": "200670a2-8a67-48e9-b9d8-a025b14bfa9b", 00:09:05.350 "is_configured": true, 00:09:05.350 "data_offset": 2048, 
00:09:05.350 "data_size": 63488 00:09:05.350 }, 00:09:05.350 { 00:09:05.350 "name": "BaseBdev2", 00:09:05.350 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:05.350 "is_configured": true, 00:09:05.350 "data_offset": 2048, 00:09:05.350 "data_size": 63488 00:09:05.350 }, 00:09:05.350 { 00:09:05.350 "name": "BaseBdev3", 00:09:05.350 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:05.350 "is_configured": true, 00:09:05.350 "data_offset": 2048, 00:09:05.350 "data_size": 63488 00:09:05.350 } 00:09:05.350 ] 00:09:05.350 }' 00:09:05.350 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.350 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.919 [2024-12-12 19:37:48.466111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.919 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.919 "name": "Existed_Raid", 00:09:05.919 "aliases": [ 00:09:05.920 "4ece2357-3119-47f6-8596-1016f06c3859" 00:09:05.920 ], 00:09:05.920 "product_name": "Raid Volume", 00:09:05.920 "block_size": 512, 00:09:05.920 "num_blocks": 190464, 00:09:05.920 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:05.920 "assigned_rate_limits": { 00:09:05.920 "rw_ios_per_sec": 0, 00:09:05.920 "rw_mbytes_per_sec": 0, 00:09:05.920 "r_mbytes_per_sec": 0, 00:09:05.920 "w_mbytes_per_sec": 0 00:09:05.920 }, 00:09:05.920 "claimed": false, 00:09:05.920 "zoned": false, 00:09:05.920 "supported_io_types": { 00:09:05.920 "read": true, 00:09:05.920 "write": true, 00:09:05.920 "unmap": true, 00:09:05.920 "flush": true, 00:09:05.920 "reset": true, 00:09:05.920 "nvme_admin": false, 00:09:05.920 "nvme_io": false, 00:09:05.920 "nvme_io_md": false, 00:09:05.920 "write_zeroes": true, 00:09:05.920 "zcopy": false, 00:09:05.920 "get_zone_info": false, 00:09:05.920 "zone_management": false, 00:09:05.920 "zone_append": false, 00:09:05.920 "compare": false, 00:09:05.920 "compare_and_write": false, 00:09:05.920 "abort": false, 00:09:05.920 "seek_hole": false, 00:09:05.920 "seek_data": false, 00:09:05.920 "copy": false, 00:09:05.920 "nvme_iov_md": false 00:09:05.920 }, 00:09:05.920 "memory_domains": [ 00:09:05.920 { 00:09:05.920 "dma_device_id": "system", 00:09:05.920 "dma_device_type": 1 00:09:05.920 }, 00:09:05.920 { 00:09:05.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.920 "dma_device_type": 2 00:09:05.920 }, 00:09:05.920 { 00:09:05.920 "dma_device_id": "system", 00:09:05.920 "dma_device_type": 1 00:09:05.920 }, 00:09:05.920 { 00:09:05.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.920 "dma_device_type": 2 00:09:05.920 }, 00:09:05.920 { 
00:09:05.920 "dma_device_id": "system", 00:09:05.920 "dma_device_type": 1 00:09:05.920 }, 00:09:05.920 { 00:09:05.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.920 "dma_device_type": 2 00:09:05.920 } 00:09:05.920 ], 00:09:05.920 "driver_specific": { 00:09:05.920 "raid": { 00:09:05.920 "uuid": "4ece2357-3119-47f6-8596-1016f06c3859", 00:09:05.920 "strip_size_kb": 64, 00:09:05.920 "state": "online", 00:09:05.920 "raid_level": "concat", 00:09:05.920 "superblock": true, 00:09:05.920 "num_base_bdevs": 3, 00:09:05.920 "num_base_bdevs_discovered": 3, 00:09:05.920 "num_base_bdevs_operational": 3, 00:09:05.920 "base_bdevs_list": [ 00:09:05.920 { 00:09:05.920 "name": "NewBaseBdev", 00:09:05.920 "uuid": "200670a2-8a67-48e9-b9d8-a025b14bfa9b", 00:09:05.920 "is_configured": true, 00:09:05.920 "data_offset": 2048, 00:09:05.920 "data_size": 63488 00:09:05.920 }, 00:09:05.920 { 00:09:05.920 "name": "BaseBdev2", 00:09:05.920 "uuid": "412368b2-93ad-4c67-a88c-4c983b975b44", 00:09:05.920 "is_configured": true, 00:09:05.920 "data_offset": 2048, 00:09:05.920 "data_size": 63488 00:09:05.920 }, 00:09:05.920 { 00:09:05.920 "name": "BaseBdev3", 00:09:05.920 "uuid": "59773fa5-dde1-46c7-96e0-de595a800039", 00:09:05.920 "is_configured": true, 00:09:05.920 "data_offset": 2048, 00:09:05.920 "data_size": 63488 00:09:05.920 } 00:09:05.920 ] 00:09:05.920 } 00:09:05.920 } 00:09:05.920 }' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:05.920 BaseBdev2 00:09:05.920 BaseBdev3' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.920 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.180 [2024-12-12 19:37:48.765264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.180 [2024-12-12 19:37:48.765334] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.180 [2024-12-12 19:37:48.765434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.180 [2024-12-12 19:37:48.765524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.180 [2024-12-12 19:37:48.765683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67916 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67916 ']' 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67916 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67916 00:09:06.180 killing process with pid 67916 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.180 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67916' 00:09:06.181 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67916 00:09:06.181 [2024-12-12 19:37:48.813233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.181 19:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67916 00:09:06.440 [2024-12-12 19:37:49.115061] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.855 19:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.855 00:09:07.855 real 0m10.780s 00:09:07.855 user 0m17.252s 00:09:07.855 sys 0m1.840s 00:09:07.855 19:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.855 19:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:07.855 ************************************ 00:09:07.855 END TEST raid_state_function_test_sb 00:09:07.855 ************************************ 00:09:07.855 19:37:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:07.855 19:37:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:07.855 19:37:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.855 19:37:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.855 ************************************ 00:09:07.855 START TEST raid_superblock_test 00:09:07.855 ************************************ 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:07.855 19:37:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68546 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:07.855 19:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68546 00:09:07.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.856 19:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68546 ']' 00:09:07.856 19:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.856 19:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.856 19:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.856 19:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.856 19:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.856 [2024-12-12 19:37:50.377713] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:07.856 [2024-12-12 19:37:50.377842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68546 ] 00:09:07.856 [2024-12-12 19:37:50.547300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.135 [2024-12-12 19:37:50.669483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.135 [2024-12-12 19:37:50.866604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.135 [2024-12-12 19:37:50.866666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:08.394 
19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.394 malloc1 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.394 [2024-12-12 19:37:51.228212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:08.394 [2024-12-12 19:37:51.228311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.394 [2024-12-12 19:37:51.228364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:08.394 [2024-12-12 19:37:51.228390] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.394 [2024-12-12 19:37:51.230480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.394 [2024-12-12 19:37:51.230560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:08.394 pt1 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.394 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.654 malloc2 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.654 [2024-12-12 19:37:51.281430] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:08.654 [2024-12-12 19:37:51.281524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.654 [2024-12-12 19:37:51.281574] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:08.654 [2024-12-12 19:37:51.281636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.654 [2024-12-12 19:37:51.283789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.654 [2024-12-12 19:37:51.283861] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:08.654 
pt2 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.654 malloc3 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.654 [2024-12-12 19:37:51.347491] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:08.654 [2024-12-12 19:37:51.347596] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.654 [2024-12-12 19:37:51.347633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:08.654 [2024-12-12 19:37:51.347661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.654 [2024-12-12 19:37:51.349830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.654 [2024-12-12 19:37:51.349901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:08.654 pt3 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.654 [2024-12-12 19:37:51.359521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:08.654 [2024-12-12 19:37:51.361398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.654 [2024-12-12 19:37:51.361508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:08.654 [2024-12-12 19:37:51.361773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:08.654 [2024-12-12 19:37:51.361834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.654 [2024-12-12 19:37:51.362155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:08.654 [2024-12-12 19:37:51.362373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:08.654 [2024-12-12 19:37:51.362418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:08.654 [2024-12-12 19:37:51.362665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.654 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.655 19:37:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.655 "name": "raid_bdev1", 00:09:08.655 "uuid": "58f5e85e-ff07-4163-8676-9970f62f06be", 00:09:08.655 "strip_size_kb": 64, 00:09:08.655 "state": "online", 00:09:08.655 "raid_level": "concat", 00:09:08.655 "superblock": true, 00:09:08.655 "num_base_bdevs": 3, 00:09:08.655 "num_base_bdevs_discovered": 3, 00:09:08.655 "num_base_bdevs_operational": 3, 00:09:08.655 "base_bdevs_list": [ 00:09:08.655 { 00:09:08.655 "name": "pt1", 00:09:08.655 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.655 "is_configured": true, 00:09:08.655 "data_offset": 2048, 00:09:08.655 "data_size": 63488 00:09:08.655 }, 00:09:08.655 { 00:09:08.655 "name": "pt2", 00:09:08.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.655 "is_configured": true, 00:09:08.655 "data_offset": 2048, 00:09:08.655 "data_size": 63488 00:09:08.655 }, 00:09:08.655 { 00:09:08.655 "name": "pt3", 00:09:08.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.655 "is_configured": true, 00:09:08.655 "data_offset": 2048, 00:09:08.655 "data_size": 63488 00:09:08.655 } 00:09:08.655 ] 00:09:08.655 }' 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.655 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.225 [2024-12-12 19:37:51.843002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.225 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.225 "name": "raid_bdev1", 00:09:09.225 "aliases": [ 00:09:09.225 "58f5e85e-ff07-4163-8676-9970f62f06be" 00:09:09.225 ], 00:09:09.225 "product_name": "Raid Volume", 00:09:09.225 "block_size": 512, 00:09:09.225 "num_blocks": 190464, 00:09:09.226 "uuid": "58f5e85e-ff07-4163-8676-9970f62f06be", 00:09:09.226 "assigned_rate_limits": { 00:09:09.226 "rw_ios_per_sec": 0, 00:09:09.226 "rw_mbytes_per_sec": 0, 00:09:09.226 "r_mbytes_per_sec": 0, 00:09:09.226 "w_mbytes_per_sec": 0 00:09:09.226 }, 00:09:09.226 "claimed": false, 00:09:09.226 "zoned": false, 00:09:09.226 "supported_io_types": { 00:09:09.226 "read": true, 00:09:09.226 "write": true, 00:09:09.226 "unmap": true, 00:09:09.226 "flush": true, 00:09:09.226 "reset": true, 00:09:09.226 "nvme_admin": false, 00:09:09.226 "nvme_io": false, 00:09:09.226 "nvme_io_md": false, 00:09:09.226 "write_zeroes": true, 00:09:09.226 "zcopy": false, 00:09:09.226 "get_zone_info": false, 00:09:09.226 "zone_management": false, 00:09:09.226 "zone_append": false, 00:09:09.226 "compare": 
false, 00:09:09.226 "compare_and_write": false, 00:09:09.226 "abort": false, 00:09:09.226 "seek_hole": false, 00:09:09.226 "seek_data": false, 00:09:09.226 "copy": false, 00:09:09.226 "nvme_iov_md": false 00:09:09.226 }, 00:09:09.226 "memory_domains": [ 00:09:09.226 { 00:09:09.226 "dma_device_id": "system", 00:09:09.226 "dma_device_type": 1 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.226 "dma_device_type": 2 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "dma_device_id": "system", 00:09:09.226 "dma_device_type": 1 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.226 "dma_device_type": 2 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "dma_device_id": "system", 00:09:09.226 "dma_device_type": 1 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.226 "dma_device_type": 2 00:09:09.226 } 00:09:09.226 ], 00:09:09.226 "driver_specific": { 00:09:09.226 "raid": { 00:09:09.226 "uuid": "58f5e85e-ff07-4163-8676-9970f62f06be", 00:09:09.226 "strip_size_kb": 64, 00:09:09.226 "state": "online", 00:09:09.226 "raid_level": "concat", 00:09:09.226 "superblock": true, 00:09:09.226 "num_base_bdevs": 3, 00:09:09.226 "num_base_bdevs_discovered": 3, 00:09:09.226 "num_base_bdevs_operational": 3, 00:09:09.226 "base_bdevs_list": [ 00:09:09.226 { 00:09:09.226 "name": "pt1", 00:09:09.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.226 "is_configured": true, 00:09:09.226 "data_offset": 2048, 00:09:09.226 "data_size": 63488 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "name": "pt2", 00:09:09.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.226 "is_configured": true, 00:09:09.226 "data_offset": 2048, 00:09:09.226 "data_size": 63488 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "name": "pt3", 00:09:09.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:09.226 "is_configured": true, 00:09:09.226 "data_offset": 2048, 00:09:09.226 
"data_size": 63488 00:09:09.226 } 00:09:09.226 ] 00:09:09.226 } 00:09:09.226 } 00:09:09.226 }' 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:09.226 pt2 00:09:09.226 pt3' 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.226 19:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.226 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.226 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.226 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.226 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:09.226 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.226 19:37:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.226 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.226 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:09.486 [2024-12-12 19:37:52.130462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.486 19:37:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=58f5e85e-ff07-4163-8676-9970f62f06be 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 58f5e85e-ff07-4163-8676-9970f62f06be ']' 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.486 [2024-12-12 19:37:52.158085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.486 [2024-12-12 19:37:52.158113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.486 [2024-12-12 19:37:52.158185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.486 [2024-12-12 19:37:52.158249] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.486 [2024-12-12 19:37:52.158258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.486 19:37:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.486 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.487 [2024-12-12 19:37:52.309952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:09.487 [2024-12-12 19:37:52.311932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:09:09.487 [2024-12-12 19:37:52.312053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:09.487 [2024-12-12 19:37:52.312130] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:09.487 [2024-12-12 19:37:52.312273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:09.487 [2024-12-12 19:37:52.312375] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:09.487 [2024-12-12 19:37:52.312431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.487 [2024-12-12 19:37:52.312443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:09.487 request: 00:09:09.487 { 00:09:09.487 "name": "raid_bdev1", 00:09:09.487 "raid_level": "concat", 00:09:09.487 "base_bdevs": [ 00:09:09.487 "malloc1", 00:09:09.487 "malloc2", 00:09:09.487 "malloc3" 00:09:09.487 ], 00:09:09.487 "strip_size_kb": 64, 00:09:09.487 "superblock": false, 00:09:09.487 "method": "bdev_raid_create", 00:09:09.487 "req_id": 1 00:09:09.487 } 00:09:09.487 Got JSON-RPC error response 00:09:09.487 response: 00:09:09.487 { 00:09:09.487 "code": -17, 00:09:09.487 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:09.487 } 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:09.487 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.747 [2024-12-12 19:37:52.377771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:09.747 [2024-12-12 19:37:52.377838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.747 [2024-12-12 19:37:52.377861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:09.747 [2024-12-12 19:37:52.377871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.747 [2024-12-12 19:37:52.380221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.747 [2024-12-12 19:37:52.380298] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:09.747 [2024-12-12 19:37:52.380391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:09.747 [2024-12-12 19:37:52.380451] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:09.747 pt1 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.747 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.748 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.748 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.748 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.748 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.748 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.748 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.748 "name": "raid_bdev1", 
00:09:09.748 "uuid": "58f5e85e-ff07-4163-8676-9970f62f06be", 00:09:09.748 "strip_size_kb": 64, 00:09:09.748 "state": "configuring", 00:09:09.748 "raid_level": "concat", 00:09:09.748 "superblock": true, 00:09:09.748 "num_base_bdevs": 3, 00:09:09.748 "num_base_bdevs_discovered": 1, 00:09:09.748 "num_base_bdevs_operational": 3, 00:09:09.748 "base_bdevs_list": [ 00:09:09.748 { 00:09:09.748 "name": "pt1", 00:09:09.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.748 "is_configured": true, 00:09:09.748 "data_offset": 2048, 00:09:09.748 "data_size": 63488 00:09:09.748 }, 00:09:09.748 { 00:09:09.748 "name": null, 00:09:09.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.748 "is_configured": false, 00:09:09.748 "data_offset": 2048, 00:09:09.748 "data_size": 63488 00:09:09.748 }, 00:09:09.748 { 00:09:09.748 "name": null, 00:09:09.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:09.748 "is_configured": false, 00:09:09.748 "data_offset": 2048, 00:09:09.748 "data_size": 63488 00:09:09.748 } 00:09:09.748 ] 00:09:09.748 }' 00:09:09.748 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.748 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.007 [2024-12-12 19:37:52.829107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.007 [2024-12-12 19:37:52.829226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.007 [2024-12-12 19:37:52.829272] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:10.007 [2024-12-12 19:37:52.829300] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.007 [2024-12-12 19:37:52.829886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.007 [2024-12-12 19:37:52.829958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.007 [2024-12-12 19:37:52.830104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:10.007 [2024-12-12 19:37:52.830181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.007 pt2 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.007 [2024-12-12 19:37:52.841087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.007 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.266 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.266 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.266 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.266 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.266 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.266 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.266 "name": "raid_bdev1", 00:09:10.266 "uuid": "58f5e85e-ff07-4163-8676-9970f62f06be", 00:09:10.266 "strip_size_kb": 64, 00:09:10.266 "state": "configuring", 00:09:10.266 "raid_level": "concat", 00:09:10.266 "superblock": true, 00:09:10.266 "num_base_bdevs": 3, 00:09:10.266 "num_base_bdevs_discovered": 1, 00:09:10.266 "num_base_bdevs_operational": 3, 00:09:10.266 "base_bdevs_list": [ 00:09:10.266 { 00:09:10.266 "name": "pt1", 00:09:10.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.266 "is_configured": true, 00:09:10.266 "data_offset": 2048, 00:09:10.266 "data_size": 63488 00:09:10.266 }, 00:09:10.266 { 00:09:10.266 "name": null, 00:09:10.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.266 "is_configured": false, 00:09:10.266 "data_offset": 0, 00:09:10.266 "data_size": 63488 00:09:10.266 }, 00:09:10.266 { 00:09:10.266 "name": null, 00:09:10.266 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.266 "is_configured": false, 00:09:10.266 "data_offset": 2048, 00:09:10.266 "data_size": 63488 00:09:10.266 } 00:09:10.266 ] 00:09:10.266 }' 00:09:10.266 19:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.266 19:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.527 [2024-12-12 19:37:53.268364] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.527 [2024-12-12 19:37:53.268520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.527 [2024-12-12 19:37:53.268558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:10.527 [2024-12-12 19:37:53.268594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.527 [2024-12-12 19:37:53.269073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.527 [2024-12-12 19:37:53.269097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.527 [2024-12-12 19:37:53.269182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:10.527 [2024-12-12 19:37:53.269207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.527 pt2 00:09:10.527 19:37:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.527 [2024-12-12 19:37:53.280353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:10.527 [2024-12-12 19:37:53.280429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.527 [2024-12-12 19:37:53.280446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:10.527 [2024-12-12 19:37:53.280457] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.527 [2024-12-12 19:37:53.280932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.527 [2024-12-12 19:37:53.280968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:10.527 [2024-12-12 19:37:53.281054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:10.527 [2024-12-12 19:37:53.281111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:10.527 [2024-12-12 19:37:53.281243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.527 [2024-12-12 19:37:53.281254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.527 [2024-12-12 19:37:53.281509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:10.527 [2024-12-12 19:37:53.281748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.527 [2024-12-12 19:37:53.281762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:10.527 [2024-12-12 19:37:53.281922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.527 pt3 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.527 19:37:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.527 "name": "raid_bdev1", 00:09:10.527 "uuid": "58f5e85e-ff07-4163-8676-9970f62f06be", 00:09:10.527 "strip_size_kb": 64, 00:09:10.527 "state": "online", 00:09:10.527 "raid_level": "concat", 00:09:10.527 "superblock": true, 00:09:10.527 "num_base_bdevs": 3, 00:09:10.527 "num_base_bdevs_discovered": 3, 00:09:10.527 "num_base_bdevs_operational": 3, 00:09:10.527 "base_bdevs_list": [ 00:09:10.527 { 00:09:10.527 "name": "pt1", 00:09:10.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.527 "is_configured": true, 00:09:10.527 "data_offset": 2048, 00:09:10.527 "data_size": 63488 00:09:10.527 }, 00:09:10.527 { 00:09:10.527 "name": "pt2", 00:09:10.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.527 "is_configured": true, 00:09:10.527 "data_offset": 2048, 00:09:10.527 "data_size": 63488 00:09:10.527 }, 00:09:10.527 { 00:09:10.527 "name": "pt3", 00:09:10.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.527 "is_configured": true, 00:09:10.527 "data_offset": 2048, 00:09:10.527 "data_size": 63488 00:09:10.527 } 00:09:10.527 ] 00:09:10.527 }' 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.527 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.097 [2024-12-12 19:37:53.759872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.097 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.097 "name": "raid_bdev1", 00:09:11.097 "aliases": [ 00:09:11.097 "58f5e85e-ff07-4163-8676-9970f62f06be" 00:09:11.097 ], 00:09:11.097 "product_name": "Raid Volume", 00:09:11.097 "block_size": 512, 00:09:11.097 "num_blocks": 190464, 00:09:11.097 "uuid": "58f5e85e-ff07-4163-8676-9970f62f06be", 00:09:11.097 "assigned_rate_limits": { 00:09:11.097 "rw_ios_per_sec": 0, 00:09:11.097 "rw_mbytes_per_sec": 0, 00:09:11.097 "r_mbytes_per_sec": 0, 00:09:11.097 "w_mbytes_per_sec": 0 00:09:11.097 }, 00:09:11.097 "claimed": false, 00:09:11.097 "zoned": false, 00:09:11.097 "supported_io_types": { 00:09:11.097 "read": true, 00:09:11.097 "write": true, 00:09:11.097 "unmap": true, 00:09:11.097 "flush": true, 00:09:11.097 "reset": true, 00:09:11.097 "nvme_admin": false, 00:09:11.097 "nvme_io": false, 
00:09:11.097 "nvme_io_md": false, 00:09:11.097 "write_zeroes": true, 00:09:11.097 "zcopy": false, 00:09:11.097 "get_zone_info": false, 00:09:11.097 "zone_management": false, 00:09:11.097 "zone_append": false, 00:09:11.097 "compare": false, 00:09:11.097 "compare_and_write": false, 00:09:11.097 "abort": false, 00:09:11.097 "seek_hole": false, 00:09:11.097 "seek_data": false, 00:09:11.097 "copy": false, 00:09:11.097 "nvme_iov_md": false 00:09:11.097 }, 00:09:11.097 "memory_domains": [ 00:09:11.097 { 00:09:11.097 "dma_device_id": "system", 00:09:11.097 "dma_device_type": 1 00:09:11.097 }, 00:09:11.097 { 00:09:11.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.097 "dma_device_type": 2 00:09:11.097 }, 00:09:11.097 { 00:09:11.097 "dma_device_id": "system", 00:09:11.097 "dma_device_type": 1 00:09:11.097 }, 00:09:11.097 { 00:09:11.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.097 "dma_device_type": 2 00:09:11.097 }, 00:09:11.097 { 00:09:11.097 "dma_device_id": "system", 00:09:11.097 "dma_device_type": 1 00:09:11.097 }, 00:09:11.097 { 00:09:11.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.097 "dma_device_type": 2 00:09:11.097 } 00:09:11.097 ], 00:09:11.097 "driver_specific": { 00:09:11.097 "raid": { 00:09:11.098 "uuid": "58f5e85e-ff07-4163-8676-9970f62f06be", 00:09:11.098 "strip_size_kb": 64, 00:09:11.098 "state": "online", 00:09:11.098 "raid_level": "concat", 00:09:11.098 "superblock": true, 00:09:11.098 "num_base_bdevs": 3, 00:09:11.098 "num_base_bdevs_discovered": 3, 00:09:11.098 "num_base_bdevs_operational": 3, 00:09:11.098 "base_bdevs_list": [ 00:09:11.098 { 00:09:11.098 "name": "pt1", 00:09:11.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.098 "is_configured": true, 00:09:11.098 "data_offset": 2048, 00:09:11.098 "data_size": 63488 00:09:11.098 }, 00:09:11.098 { 00:09:11.098 "name": "pt2", 00:09:11.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.098 "is_configured": true, 00:09:11.098 "data_offset": 2048, 00:09:11.098 
"data_size": 63488 00:09:11.098 }, 00:09:11.098 { 00:09:11.098 "name": "pt3", 00:09:11.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.098 "is_configured": true, 00:09:11.098 "data_offset": 2048, 00:09:11.098 "data_size": 63488 00:09:11.098 } 00:09:11.098 ] 00:09:11.098 } 00:09:11.098 } 00:09:11.098 }' 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:11.098 pt2 00:09:11.098 pt3' 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.098 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.358 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.358 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.358 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.358 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:11.358 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.358 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.358 19:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.358 19:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:11.358 [2024-12-12 19:37:54.023380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 58f5e85e-ff07-4163-8676-9970f62f06be '!=' 58f5e85e-ff07-4163-8676-9970f62f06be ']' 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68546 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68546 ']' 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68546 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68546 00:09:11.358 killing process with pid 68546 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68546' 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68546 00:09:11.358 [2024-12-12 19:37:54.093850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:11.358 [2024-12-12 19:37:54.093937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.358 [2024-12-12 19:37:54.093999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.358 [2024-12-12 19:37:54.094010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:11.358 19:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68546 00:09:11.618 [2024-12-12 19:37:54.395134] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.997 19:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:12.997 00:09:12.997 real 0m5.215s 00:09:12.997 user 0m7.460s 00:09:12.997 sys 0m0.910s 00:09:12.997 19:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.997 ************************************ 00:09:12.997 END TEST raid_superblock_test 00:09:12.997 ************************************ 00:09:12.997 19:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.997 19:37:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:12.997 19:37:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:12.997 19:37:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.997 19:37:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.997 ************************************ 00:09:12.997 START TEST raid_read_error_test 00:09:12.997 ************************************ 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:12.997 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:12.998 19:37:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sVSIIGGq9q 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68799 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68799 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68799 ']' 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.998 19:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.998 [2024-12-12 19:37:55.685180] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:12.998 [2024-12-12 19:37:55.685399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68799 ] 00:09:13.258 [2024-12-12 19:37:55.861437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.258 [2024-12-12 19:37:55.980984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.518 [2024-12-12 19:37:56.178171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.518 [2024-12-12 19:37:56.178245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.778 BaseBdev1_malloc 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.778 true 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.778 [2024-12-12 19:37:56.565158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:13.778 [2024-12-12 19:37:56.565214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.778 [2024-12-12 19:37:56.565232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:13.778 [2024-12-12 19:37:56.565242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.778 [2024-12-12 19:37:56.567306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.778 [2024-12-12 19:37:56.567414] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:13.778 BaseBdev1 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.778 BaseBdev2_malloc 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.778 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.038 true 00:09:14.038 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.038 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:14.038 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.038 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.038 [2024-12-12 19:37:56.631092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:14.039 [2024-12-12 19:37:56.631206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.039 [2024-12-12 19:37:56.631245] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:14.039 [2024-12-12 19:37:56.631255] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.039 [2024-12-12 19:37:56.633363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.039 [2024-12-12 19:37:56.633405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:14.039 BaseBdev2 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.039 BaseBdev3_malloc 00:09:14.039 19:37:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.039 true 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.039 [2024-12-12 19:37:56.710282] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:14.039 [2024-12-12 19:37:56.710393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.039 [2024-12-12 19:37:56.710415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:14.039 [2024-12-12 19:37:56.710426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.039 [2024-12-12 19:37:56.712446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.039 [2024-12-12 19:37:56.712488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:14.039 BaseBdev3 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.039 [2024-12-12 19:37:56.722339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.039 [2024-12-12 19:37:56.724068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.039 [2024-12-12 19:37:56.724138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.039 [2024-12-12 19:37:56.724349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:14.039 [2024-12-12 19:37:56.724361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:14.039 [2024-12-12 19:37:56.724600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:14.039 [2024-12-12 19:37:56.724759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:14.039 [2024-12-12 19:37:56.724772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:14.039 [2024-12-12 19:37:56.724920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.039 19:37:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.039 "name": "raid_bdev1", 00:09:14.039 "uuid": "358c2912-846f-496a-960b-263b8cac3473", 00:09:14.039 "strip_size_kb": 64, 00:09:14.039 "state": "online", 00:09:14.039 "raid_level": "concat", 00:09:14.039 "superblock": true, 00:09:14.039 "num_base_bdevs": 3, 00:09:14.039 "num_base_bdevs_discovered": 3, 00:09:14.039 "num_base_bdevs_operational": 3, 00:09:14.039 "base_bdevs_list": [ 00:09:14.039 { 00:09:14.039 "name": "BaseBdev1", 00:09:14.039 "uuid": "12e475b2-c3f3-5139-82ef-84d4098a0d3a", 00:09:14.039 "is_configured": true, 00:09:14.039 "data_offset": 2048, 00:09:14.039 "data_size": 63488 00:09:14.039 }, 00:09:14.039 { 00:09:14.039 "name": "BaseBdev2", 00:09:14.039 "uuid": "00757d5e-8a3b-5d97-988f-c661bd207edd", 00:09:14.039 "is_configured": true, 00:09:14.039 "data_offset": 2048, 00:09:14.039 "data_size": 63488 
00:09:14.039 }, 00:09:14.039 { 00:09:14.039 "name": "BaseBdev3", 00:09:14.039 "uuid": "6526823e-37ca-5952-b9d1-49b6799ea716", 00:09:14.039 "is_configured": true, 00:09:14.039 "data_offset": 2048, 00:09:14.039 "data_size": 63488 00:09:14.039 } 00:09:14.039 ] 00:09:14.039 }' 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.039 19:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 19:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:14.608 19:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.608 [2024-12-12 19:37:57.282728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.548 "name": "raid_bdev1", 00:09:15.548 "uuid": "358c2912-846f-496a-960b-263b8cac3473", 00:09:15.548 "strip_size_kb": 64, 00:09:15.548 "state": "online", 00:09:15.548 "raid_level": "concat", 00:09:15.548 "superblock": true, 00:09:15.548 "num_base_bdevs": 3, 00:09:15.548 "num_base_bdevs_discovered": 3, 00:09:15.548 "num_base_bdevs_operational": 3, 00:09:15.548 "base_bdevs_list": [ 00:09:15.548 { 00:09:15.548 "name": "BaseBdev1", 00:09:15.548 "uuid": "12e475b2-c3f3-5139-82ef-84d4098a0d3a", 00:09:15.548 "is_configured": true, 00:09:15.548 "data_offset": 2048, 00:09:15.548 "data_size": 63488 
00:09:15.548 }, 00:09:15.548 { 00:09:15.548 "name": "BaseBdev2", 00:09:15.548 "uuid": "00757d5e-8a3b-5d97-988f-c661bd207edd", 00:09:15.548 "is_configured": true, 00:09:15.548 "data_offset": 2048, 00:09:15.548 "data_size": 63488 00:09:15.548 }, 00:09:15.548 { 00:09:15.548 "name": "BaseBdev3", 00:09:15.548 "uuid": "6526823e-37ca-5952-b9d1-49b6799ea716", 00:09:15.548 "is_configured": true, 00:09:15.548 "data_offset": 2048, 00:09:15.548 "data_size": 63488 00:09:15.548 } 00:09:15.548 ] 00:09:15.548 }' 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.548 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.128 [2024-12-12 19:37:58.662729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.128 [2024-12-12 19:37:58.662828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.128 [2024-12-12 19:37:58.665896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.128 [2024-12-12 19:37:58.665994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.128 [2024-12-12 19:37:58.666085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.128 [2024-12-12 19:37:58.666129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:16.128 { 00:09:16.128 "results": [ 00:09:16.128 { 00:09:16.128 "job": "raid_bdev1", 00:09:16.128 "core_mask": "0x1", 00:09:16.128 "workload": "randrw", 00:09:16.128 "percentage": 50, 
00:09:16.128 "status": "finished", 00:09:16.128 "queue_depth": 1, 00:09:16.128 "io_size": 131072, 00:09:16.128 "runtime": 1.381007, 00:09:16.128 "iops": 15615.416866098434, 00:09:16.128 "mibps": 1951.9271082623043, 00:09:16.128 "io_failed": 1, 00:09:16.128 "io_timeout": 0, 00:09:16.128 "avg_latency_us": 88.78345738298235, 00:09:16.128 "min_latency_us": 25.4882096069869, 00:09:16.128 "max_latency_us": 1380.8349344978167 00:09:16.128 } 00:09:16.128 ], 00:09:16.128 "core_count": 1 00:09:16.128 } 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68799 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68799 ']' 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68799 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68799 00:09:16.128 killing process with pid 68799 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68799' 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68799 00:09:16.128 [2024-12-12 19:37:58.712991] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.128 19:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68799 00:09:16.128 [2024-12-12 
19:37:58.942730] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sVSIIGGq9q 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:17.521 00:09:17.521 real 0m4.546s 00:09:17.521 user 0m5.399s 00:09:17.521 sys 0m0.583s 00:09:17.521 ************************************ 00:09:17.521 END TEST raid_read_error_test 00:09:17.521 ************************************ 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.521 19:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.521 19:38:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:17.521 19:38:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:17.521 19:38:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.521 19:38:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.521 ************************************ 00:09:17.521 START TEST raid_write_error_test 00:09:17.521 ************************************ 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:17.521 19:38:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:17.521 19:38:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ytUQYY1ogN 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68945 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68945 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 68945 ']' 00:09:17.521 19:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.522 19:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.522 19:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:17.522 19:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.522 19:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.522 [2024-12-12 19:38:00.298647] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:17.522 [2024-12-12 19:38:00.298918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68945 ] 00:09:17.780 [2024-12-12 19:38:00.473733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.780 [2024-12-12 19:38:00.591350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.040 [2024-12-12 19:38:00.791446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.040 [2024-12-12 19:38:00.791613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.611 BaseBdev1_malloc 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.611 true 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.611 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.611 [2024-12-12 19:38:01.267196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:18.611 [2024-12-12 19:38:01.267253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.611 [2024-12-12 19:38:01.267291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:18.612 [2024-12-12 19:38:01.267302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.612 [2024-12-12 19:38:01.269443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.612 [2024-12-12 19:38:01.269488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:18.612 BaseBdev1 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.612 BaseBdev2_malloc 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.612 true 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.612 [2024-12-12 19:38:01.334239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:18.612 [2024-12-12 19:38:01.334347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.612 [2024-12-12 19:38:01.334386] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:18.612 [2024-12-12 19:38:01.334399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.612 [2024-12-12 19:38:01.336732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.612 [2024-12-12 19:38:01.336773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:18.612 BaseBdev2 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.612 19:38:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.612 BaseBdev3_malloc 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.612 true 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.612 [2024-12-12 19:38:01.412729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:18.612 [2024-12-12 19:38:01.412801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.612 [2024-12-12 19:38:01.412824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:18.612 [2024-12-12 19:38:01.412836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.612 [2024-12-12 19:38:01.415282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.612 [2024-12-12 19:38:01.415331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:18.612 BaseBdev3 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.612 [2024-12-12 19:38:01.424792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.612 [2024-12-12 19:38:01.426911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.612 [2024-12-12 19:38:01.426994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.612 [2024-12-12 19:38:01.427234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:18.612 [2024-12-12 19:38:01.427248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.612 [2024-12-12 19:38:01.427578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:18.612 [2024-12-12 19:38:01.427762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:18.612 [2024-12-12 19:38:01.427775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:18.612 [2024-12-12 19:38:01.427955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.612 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.871 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.871 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.871 "name": "raid_bdev1", 00:09:18.871 "uuid": "69ae855f-81a0-43b4-bb63-af4750233d8c", 00:09:18.871 "strip_size_kb": 64, 00:09:18.871 "state": "online", 00:09:18.871 "raid_level": "concat", 00:09:18.871 "superblock": true, 00:09:18.871 "num_base_bdevs": 3, 00:09:18.871 "num_base_bdevs_discovered": 3, 00:09:18.871 "num_base_bdevs_operational": 3, 00:09:18.871 "base_bdevs_list": [ 00:09:18.871 { 00:09:18.871 
"name": "BaseBdev1", 00:09:18.871 "uuid": "65126ded-8f8e-532d-904f-6cf6bca3f3ff", 00:09:18.871 "is_configured": true, 00:09:18.871 "data_offset": 2048, 00:09:18.871 "data_size": 63488 00:09:18.871 }, 00:09:18.871 { 00:09:18.871 "name": "BaseBdev2", 00:09:18.871 "uuid": "6111abdc-b366-5a0a-8a58-ff9e607844d6", 00:09:18.871 "is_configured": true, 00:09:18.871 "data_offset": 2048, 00:09:18.871 "data_size": 63488 00:09:18.871 }, 00:09:18.871 { 00:09:18.871 "name": "BaseBdev3", 00:09:18.871 "uuid": "3925485f-8952-5980-a3bb-1edcf238ff89", 00:09:18.871 "is_configured": true, 00:09:18.871 "data_offset": 2048, 00:09:18.871 "data_size": 63488 00:09:18.871 } 00:09:18.871 ] 00:09:18.871 }' 00:09:18.871 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.871 19:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.129 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:19.129 19:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:19.388 [2024-12-12 19:38:01.989121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.325 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.326 19:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.326 19:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.326 19:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.326 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.326 "name": "raid_bdev1", 00:09:20.326 "uuid": "69ae855f-81a0-43b4-bb63-af4750233d8c", 00:09:20.326 "strip_size_kb": 64, 00:09:20.326 "state": "online", 
00:09:20.326 "raid_level": "concat", 00:09:20.326 "superblock": true, 00:09:20.326 "num_base_bdevs": 3, 00:09:20.326 "num_base_bdevs_discovered": 3, 00:09:20.326 "num_base_bdevs_operational": 3, 00:09:20.326 "base_bdevs_list": [ 00:09:20.326 { 00:09:20.326 "name": "BaseBdev1", 00:09:20.326 "uuid": "65126ded-8f8e-532d-904f-6cf6bca3f3ff", 00:09:20.326 "is_configured": true, 00:09:20.326 "data_offset": 2048, 00:09:20.326 "data_size": 63488 00:09:20.326 }, 00:09:20.326 { 00:09:20.326 "name": "BaseBdev2", 00:09:20.326 "uuid": "6111abdc-b366-5a0a-8a58-ff9e607844d6", 00:09:20.326 "is_configured": true, 00:09:20.326 "data_offset": 2048, 00:09:20.326 "data_size": 63488 00:09:20.326 }, 00:09:20.326 { 00:09:20.326 "name": "BaseBdev3", 00:09:20.326 "uuid": "3925485f-8952-5980-a3bb-1edcf238ff89", 00:09:20.326 "is_configured": true, 00:09:20.326 "data_offset": 2048, 00:09:20.326 "data_size": 63488 00:09:20.326 } 00:09:20.326 ] 00:09:20.326 }' 00:09:20.326 19:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.326 19:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.585 [2024-12-12 19:38:03.369281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.585 [2024-12-12 19:38:03.369383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.585 [2024-12-12 19:38:03.372219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.585 [2024-12-12 19:38:03.372323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.585 [2024-12-12 19:38:03.372382] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.585 [2024-12-12 19:38:03.372435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:20.585 { 00:09:20.585 "results": [ 00:09:20.585 { 00:09:20.585 "job": "raid_bdev1", 00:09:20.585 "core_mask": "0x1", 00:09:20.585 "workload": "randrw", 00:09:20.585 "percentage": 50, 00:09:20.585 "status": "finished", 00:09:20.585 "queue_depth": 1, 00:09:20.585 "io_size": 131072, 00:09:20.585 "runtime": 1.381189, 00:09:20.585 "iops": 15119.58175166469, 00:09:20.585 "mibps": 1889.9477189580862, 00:09:20.585 "io_failed": 1, 00:09:20.585 "io_timeout": 0, 00:09:20.585 "avg_latency_us": 91.63667921536222, 00:09:20.585 "min_latency_us": 26.941484716157206, 00:09:20.585 "max_latency_us": 1438.071615720524 00:09:20.585 } 00:09:20.585 ], 00:09:20.585 "core_count": 1 00:09:20.585 } 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68945 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 68945 ']' 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 68945 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68945 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 68945' 00:09:20.585 killing process with pid 68945 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 68945 00:09:20.585 [2024-12-12 19:38:03.416286] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.585 19:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 68945 00:09:20.845 [2024-12-12 19:38:03.650432] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ytUQYY1ogN 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:22.224 00:09:22.224 real 0m4.657s 00:09:22.224 user 0m5.593s 00:09:22.224 sys 0m0.574s 00:09:22.224 ************************************ 00:09:22.224 END TEST raid_write_error_test 00:09:22.224 ************************************ 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.224 19:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.224 19:38:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:22.224 19:38:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:22.224 19:38:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:22.224 19:38:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.224 19:38:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.224 ************************************ 00:09:22.224 START TEST raid_state_function_test 00:09:22.224 ************************************ 00:09:22.224 19:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:22.224 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:22.224 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:22.224 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:22.224 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:22.224 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:22.224 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69088 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69088' 00:09:22.225 Process raid pid: 69088 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69088 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69088 ']' 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.225 19:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.225 [2024-12-12 19:38:05.010231] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:22.225 [2024-12-12 19:38:05.010443] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.484 [2024-12-12 19:38:05.191192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.484 [2024-12-12 19:38:05.306071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.743 [2024-12-12 19:38:05.513969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.743 [2024-12-12 19:38:05.514106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.311 [2024-12-12 19:38:05.866496] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.311 [2024-12-12 19:38:05.866557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.311 [2024-12-12 19:38:05.866570] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.311 [2024-12-12 19:38:05.866579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.311 [2024-12-12 19:38:05.866601] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.311 [2024-12-12 19:38:05.866610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.311 
19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.311 "name": "Existed_Raid", 00:09:23.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.311 "strip_size_kb": 0, 00:09:23.311 "state": "configuring", 00:09:23.311 "raid_level": "raid1", 00:09:23.311 "superblock": false, 00:09:23.311 "num_base_bdevs": 3, 00:09:23.311 "num_base_bdevs_discovered": 0, 00:09:23.311 "num_base_bdevs_operational": 3, 00:09:23.311 "base_bdevs_list": [ 00:09:23.311 { 00:09:23.311 "name": "BaseBdev1", 00:09:23.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.311 "is_configured": false, 00:09:23.311 "data_offset": 0, 00:09:23.311 "data_size": 0 00:09:23.311 }, 00:09:23.311 { 00:09:23.311 "name": "BaseBdev2", 00:09:23.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.311 "is_configured": false, 00:09:23.311 "data_offset": 0, 00:09:23.311 "data_size": 0 00:09:23.311 }, 00:09:23.311 { 00:09:23.311 "name": "BaseBdev3", 00:09:23.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.311 "is_configured": false, 00:09:23.311 "data_offset": 0, 00:09:23.311 "data_size": 0 00:09:23.311 } 00:09:23.311 ] 00:09:23.311 }' 00:09:23.311 19:38:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.311 19:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.595 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.595 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.595 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.595 [2024-12-12 19:38:06.317771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.595 [2024-12-12 19:38:06.317812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:23.595 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.595 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.595 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.595 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.595 [2024-12-12 19:38:06.329757] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.595 [2024-12-12 19:38:06.329860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.595 [2024-12-12 19:38:06.329875] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.595 [2024-12-12 19:38:06.329885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.595 [2024-12-12 19:38:06.329891] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.595 [2024-12-12 19:38:06.329900] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.595 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.596 [2024-12-12 19:38:06.377029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.596 BaseBdev1 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.596 [ 00:09:23.596 { 00:09:23.596 "name": "BaseBdev1", 00:09:23.596 "aliases": [ 00:09:23.596 "43162847-0e30-4919-aca5-bf7219db9671" 00:09:23.596 ], 00:09:23.596 "product_name": "Malloc disk", 00:09:23.596 "block_size": 512, 00:09:23.596 "num_blocks": 65536, 00:09:23.596 "uuid": "43162847-0e30-4919-aca5-bf7219db9671", 00:09:23.596 "assigned_rate_limits": { 00:09:23.596 "rw_ios_per_sec": 0, 00:09:23.596 "rw_mbytes_per_sec": 0, 00:09:23.596 "r_mbytes_per_sec": 0, 00:09:23.596 "w_mbytes_per_sec": 0 00:09:23.596 }, 00:09:23.596 "claimed": true, 00:09:23.596 "claim_type": "exclusive_write", 00:09:23.596 "zoned": false, 00:09:23.596 "supported_io_types": { 00:09:23.596 "read": true, 00:09:23.596 "write": true, 00:09:23.596 "unmap": true, 00:09:23.596 "flush": true, 00:09:23.596 "reset": true, 00:09:23.596 "nvme_admin": false, 00:09:23.596 "nvme_io": false, 00:09:23.596 "nvme_io_md": false, 00:09:23.596 "write_zeroes": true, 00:09:23.596 "zcopy": true, 00:09:23.596 "get_zone_info": false, 00:09:23.596 "zone_management": false, 00:09:23.596 "zone_append": false, 00:09:23.596 "compare": false, 00:09:23.596 "compare_and_write": false, 00:09:23.596 "abort": true, 00:09:23.596 "seek_hole": false, 00:09:23.596 "seek_data": false, 00:09:23.596 "copy": true, 00:09:23.596 "nvme_iov_md": false 00:09:23.596 }, 00:09:23.596 "memory_domains": [ 00:09:23.596 { 00:09:23.596 "dma_device_id": "system", 00:09:23.596 "dma_device_type": 1 00:09:23.596 }, 00:09:23.596 { 00:09:23.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.596 "dma_device_type": 2 00:09:23.596 } 00:09:23.596 ], 00:09:23.596 "driver_specific": {} 00:09:23.596 } 00:09:23.596 ] 00:09:23.596 19:38:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.596 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.880 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.880 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:23.880 "name": "Existed_Raid", 00:09:23.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.880 "strip_size_kb": 0, 00:09:23.880 "state": "configuring", 00:09:23.880 "raid_level": "raid1", 00:09:23.880 "superblock": false, 00:09:23.880 "num_base_bdevs": 3, 00:09:23.880 "num_base_bdevs_discovered": 1, 00:09:23.880 "num_base_bdevs_operational": 3, 00:09:23.880 "base_bdevs_list": [ 00:09:23.880 { 00:09:23.880 "name": "BaseBdev1", 00:09:23.880 "uuid": "43162847-0e30-4919-aca5-bf7219db9671", 00:09:23.880 "is_configured": true, 00:09:23.880 "data_offset": 0, 00:09:23.880 "data_size": 65536 00:09:23.880 }, 00:09:23.880 { 00:09:23.880 "name": "BaseBdev2", 00:09:23.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.880 "is_configured": false, 00:09:23.880 "data_offset": 0, 00:09:23.880 "data_size": 0 00:09:23.880 }, 00:09:23.880 { 00:09:23.880 "name": "BaseBdev3", 00:09:23.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.880 "is_configured": false, 00:09:23.880 "data_offset": 0, 00:09:23.880 "data_size": 0 00:09:23.880 } 00:09:23.880 ] 00:09:23.880 }' 00:09:23.880 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.880 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 [2024-12-12 19:38:06.864275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.140 [2024-12-12 19:38:06.864330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 [2024-12-12 19:38:06.872297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.140 [2024-12-12 19:38:06.874296] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.140 [2024-12-12 19:38:06.874381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.140 [2024-12-12 19:38:06.874431] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.140 [2024-12-12 19:38:06.874474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.140 "name": "Existed_Raid", 00:09:24.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.140 "strip_size_kb": 0, 00:09:24.140 "state": "configuring", 00:09:24.140 "raid_level": "raid1", 00:09:24.140 "superblock": false, 00:09:24.140 "num_base_bdevs": 3, 00:09:24.140 "num_base_bdevs_discovered": 1, 00:09:24.140 "num_base_bdevs_operational": 3, 00:09:24.140 "base_bdevs_list": [ 00:09:24.140 { 00:09:24.140 "name": "BaseBdev1", 00:09:24.140 "uuid": "43162847-0e30-4919-aca5-bf7219db9671", 00:09:24.140 "is_configured": true, 00:09:24.140 "data_offset": 0, 00:09:24.140 "data_size": 65536 00:09:24.140 }, 00:09:24.140 { 00:09:24.140 "name": "BaseBdev2", 00:09:24.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.140 
"is_configured": false, 00:09:24.140 "data_offset": 0, 00:09:24.140 "data_size": 0 00:09:24.140 }, 00:09:24.140 { 00:09:24.140 "name": "BaseBdev3", 00:09:24.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.140 "is_configured": false, 00:09:24.140 "data_offset": 0, 00:09:24.140 "data_size": 0 00:09:24.140 } 00:09:24.140 ] 00:09:24.140 }' 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.140 19:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.710 [2024-12-12 19:38:07.382194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.710 BaseBdev2 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.710 19:38:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.710 [ 00:09:24.710 { 00:09:24.710 "name": "BaseBdev2", 00:09:24.710 "aliases": [ 00:09:24.710 "cfaae9c8-8a73-4844-b10f-17677a39c96a" 00:09:24.710 ], 00:09:24.710 "product_name": "Malloc disk", 00:09:24.710 "block_size": 512, 00:09:24.710 "num_blocks": 65536, 00:09:24.710 "uuid": "cfaae9c8-8a73-4844-b10f-17677a39c96a", 00:09:24.710 "assigned_rate_limits": { 00:09:24.710 "rw_ios_per_sec": 0, 00:09:24.710 "rw_mbytes_per_sec": 0, 00:09:24.710 "r_mbytes_per_sec": 0, 00:09:24.710 "w_mbytes_per_sec": 0 00:09:24.710 }, 00:09:24.710 "claimed": true, 00:09:24.710 "claim_type": "exclusive_write", 00:09:24.710 "zoned": false, 00:09:24.710 "supported_io_types": { 00:09:24.710 "read": true, 00:09:24.710 "write": true, 00:09:24.710 "unmap": true, 00:09:24.710 "flush": true, 00:09:24.710 "reset": true, 00:09:24.710 "nvme_admin": false, 00:09:24.710 "nvme_io": false, 00:09:24.710 "nvme_io_md": false, 00:09:24.710 "write_zeroes": true, 00:09:24.710 "zcopy": true, 00:09:24.710 "get_zone_info": false, 00:09:24.710 "zone_management": false, 00:09:24.710 "zone_append": false, 00:09:24.710 "compare": false, 00:09:24.710 "compare_and_write": false, 00:09:24.710 "abort": true, 00:09:24.710 "seek_hole": false, 00:09:24.710 "seek_data": false, 00:09:24.710 "copy": true, 00:09:24.710 "nvme_iov_md": false 00:09:24.710 }, 00:09:24.710 
"memory_domains": [ 00:09:24.710 { 00:09:24.710 "dma_device_id": "system", 00:09:24.710 "dma_device_type": 1 00:09:24.710 }, 00:09:24.710 { 00:09:24.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.710 "dma_device_type": 2 00:09:24.710 } 00:09:24.710 ], 00:09:24.710 "driver_specific": {} 00:09:24.710 } 00:09:24.710 ] 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.710 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.711 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.711 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:24.711 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.711 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.711 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.711 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.711 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.711 "name": "Existed_Raid", 00:09:24.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.711 "strip_size_kb": 0, 00:09:24.711 "state": "configuring", 00:09:24.711 "raid_level": "raid1", 00:09:24.711 "superblock": false, 00:09:24.711 "num_base_bdevs": 3, 00:09:24.711 "num_base_bdevs_discovered": 2, 00:09:24.711 "num_base_bdevs_operational": 3, 00:09:24.711 "base_bdevs_list": [ 00:09:24.711 { 00:09:24.711 "name": "BaseBdev1", 00:09:24.711 "uuid": "43162847-0e30-4919-aca5-bf7219db9671", 00:09:24.711 "is_configured": true, 00:09:24.711 "data_offset": 0, 00:09:24.711 "data_size": 65536 00:09:24.711 }, 00:09:24.711 { 00:09:24.711 "name": "BaseBdev2", 00:09:24.711 "uuid": "cfaae9c8-8a73-4844-b10f-17677a39c96a", 00:09:24.711 "is_configured": true, 00:09:24.711 "data_offset": 0, 00:09:24.711 "data_size": 65536 00:09:24.711 }, 00:09:24.711 { 00:09:24.711 "name": "BaseBdev3", 00:09:24.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.711 "is_configured": false, 00:09:24.711 "data_offset": 0, 00:09:24.711 "data_size": 0 00:09:24.711 } 00:09:24.711 ] 00:09:24.711 }' 00:09:24.711 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.711 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 [2024-12-12 19:38:07.915635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.280 [2024-12-12 19:38:07.915761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:25.280 [2024-12-12 19:38:07.915791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:25.280 [2024-12-12 19:38:07.916142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:25.280 [2024-12-12 19:38:07.916382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:25.280 [2024-12-12 19:38:07.916421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:25.280 [2024-12-12 19:38:07.916804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.280 BaseBdev3 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 [ 00:09:25.280 { 00:09:25.280 "name": "BaseBdev3", 00:09:25.280 "aliases": [ 00:09:25.280 "6ad02f2b-2310-4a89-ade3-6e036e5f7445" 00:09:25.280 ], 00:09:25.280 "product_name": "Malloc disk", 00:09:25.280 "block_size": 512, 00:09:25.280 "num_blocks": 65536, 00:09:25.280 "uuid": "6ad02f2b-2310-4a89-ade3-6e036e5f7445", 00:09:25.280 "assigned_rate_limits": { 00:09:25.280 "rw_ios_per_sec": 0, 00:09:25.280 "rw_mbytes_per_sec": 0, 00:09:25.280 "r_mbytes_per_sec": 0, 00:09:25.280 "w_mbytes_per_sec": 0 00:09:25.280 }, 00:09:25.280 "claimed": true, 00:09:25.280 "claim_type": "exclusive_write", 00:09:25.280 "zoned": false, 00:09:25.280 "supported_io_types": { 00:09:25.280 "read": true, 00:09:25.280 "write": true, 00:09:25.280 "unmap": true, 00:09:25.280 "flush": true, 00:09:25.280 "reset": true, 00:09:25.280 "nvme_admin": false, 00:09:25.280 "nvme_io": false, 00:09:25.280 "nvme_io_md": false, 00:09:25.280 "write_zeroes": true, 00:09:25.280 "zcopy": true, 00:09:25.280 "get_zone_info": false, 00:09:25.280 "zone_management": false, 00:09:25.280 "zone_append": false, 00:09:25.280 "compare": false, 00:09:25.280 "compare_and_write": false, 00:09:25.280 "abort": true, 00:09:25.280 "seek_hole": false, 00:09:25.280 "seek_data": false, 00:09:25.280 
"copy": true, 00:09:25.280 "nvme_iov_md": false 00:09:25.280 }, 00:09:25.280 "memory_domains": [ 00:09:25.280 { 00:09:25.280 "dma_device_id": "system", 00:09:25.280 "dma_device_type": 1 00:09:25.280 }, 00:09:25.280 { 00:09:25.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.280 "dma_device_type": 2 00:09:25.280 } 00:09:25.280 ], 00:09:25.280 "driver_specific": {} 00:09:25.280 } 00:09:25.280 ] 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.280 19:38:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 19:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.280 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.280 "name": "Existed_Raid", 00:09:25.280 "uuid": "9fb47920-db34-48ba-9f6d-21d8a7f99476", 00:09:25.280 "strip_size_kb": 0, 00:09:25.280 "state": "online", 00:09:25.280 "raid_level": "raid1", 00:09:25.280 "superblock": false, 00:09:25.280 "num_base_bdevs": 3, 00:09:25.280 "num_base_bdevs_discovered": 3, 00:09:25.280 "num_base_bdevs_operational": 3, 00:09:25.280 "base_bdevs_list": [ 00:09:25.280 { 00:09:25.280 "name": "BaseBdev1", 00:09:25.280 "uuid": "43162847-0e30-4919-aca5-bf7219db9671", 00:09:25.280 "is_configured": true, 00:09:25.280 "data_offset": 0, 00:09:25.280 "data_size": 65536 00:09:25.280 }, 00:09:25.280 { 00:09:25.280 "name": "BaseBdev2", 00:09:25.280 "uuid": "cfaae9c8-8a73-4844-b10f-17677a39c96a", 00:09:25.280 "is_configured": true, 00:09:25.280 "data_offset": 0, 00:09:25.280 "data_size": 65536 00:09:25.280 }, 00:09:25.280 { 00:09:25.280 "name": "BaseBdev3", 00:09:25.280 "uuid": "6ad02f2b-2310-4a89-ade3-6e036e5f7445", 00:09:25.280 "is_configured": true, 00:09:25.280 "data_offset": 0, 00:09:25.280 "data_size": 65536 00:09:25.280 } 00:09:25.280 ] 00:09:25.281 }' 00:09:25.281 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.281 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.540 19:38:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.540 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.540 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.800 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.800 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.800 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.800 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.800 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.800 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.800 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.801 [2024-12-12 19:38:08.395215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.801 "name": "Existed_Raid", 00:09:25.801 "aliases": [ 00:09:25.801 "9fb47920-db34-48ba-9f6d-21d8a7f99476" 00:09:25.801 ], 00:09:25.801 "product_name": "Raid Volume", 00:09:25.801 "block_size": 512, 00:09:25.801 "num_blocks": 65536, 00:09:25.801 "uuid": "9fb47920-db34-48ba-9f6d-21d8a7f99476", 00:09:25.801 "assigned_rate_limits": { 00:09:25.801 "rw_ios_per_sec": 0, 00:09:25.801 "rw_mbytes_per_sec": 0, 00:09:25.801 "r_mbytes_per_sec": 0, 00:09:25.801 "w_mbytes_per_sec": 0 00:09:25.801 }, 00:09:25.801 "claimed": false, 00:09:25.801 "zoned": false, 
00:09:25.801 "supported_io_types": { 00:09:25.801 "read": true, 00:09:25.801 "write": true, 00:09:25.801 "unmap": false, 00:09:25.801 "flush": false, 00:09:25.801 "reset": true, 00:09:25.801 "nvme_admin": false, 00:09:25.801 "nvme_io": false, 00:09:25.801 "nvme_io_md": false, 00:09:25.801 "write_zeroes": true, 00:09:25.801 "zcopy": false, 00:09:25.801 "get_zone_info": false, 00:09:25.801 "zone_management": false, 00:09:25.801 "zone_append": false, 00:09:25.801 "compare": false, 00:09:25.801 "compare_and_write": false, 00:09:25.801 "abort": false, 00:09:25.801 "seek_hole": false, 00:09:25.801 "seek_data": false, 00:09:25.801 "copy": false, 00:09:25.801 "nvme_iov_md": false 00:09:25.801 }, 00:09:25.801 "memory_domains": [ 00:09:25.801 { 00:09:25.801 "dma_device_id": "system", 00:09:25.801 "dma_device_type": 1 00:09:25.801 }, 00:09:25.801 { 00:09:25.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.801 "dma_device_type": 2 00:09:25.801 }, 00:09:25.801 { 00:09:25.801 "dma_device_id": "system", 00:09:25.801 "dma_device_type": 1 00:09:25.801 }, 00:09:25.801 { 00:09:25.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.801 "dma_device_type": 2 00:09:25.801 }, 00:09:25.801 { 00:09:25.801 "dma_device_id": "system", 00:09:25.801 "dma_device_type": 1 00:09:25.801 }, 00:09:25.801 { 00:09:25.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.801 "dma_device_type": 2 00:09:25.801 } 00:09:25.801 ], 00:09:25.801 "driver_specific": { 00:09:25.801 "raid": { 00:09:25.801 "uuid": "9fb47920-db34-48ba-9f6d-21d8a7f99476", 00:09:25.801 "strip_size_kb": 0, 00:09:25.801 "state": "online", 00:09:25.801 "raid_level": "raid1", 00:09:25.801 "superblock": false, 00:09:25.801 "num_base_bdevs": 3, 00:09:25.801 "num_base_bdevs_discovered": 3, 00:09:25.801 "num_base_bdevs_operational": 3, 00:09:25.801 "base_bdevs_list": [ 00:09:25.801 { 00:09:25.801 "name": "BaseBdev1", 00:09:25.801 "uuid": "43162847-0e30-4919-aca5-bf7219db9671", 00:09:25.801 "is_configured": true, 00:09:25.801 
"data_offset": 0, 00:09:25.801 "data_size": 65536 00:09:25.801 }, 00:09:25.801 { 00:09:25.801 "name": "BaseBdev2", 00:09:25.801 "uuid": "cfaae9c8-8a73-4844-b10f-17677a39c96a", 00:09:25.801 "is_configured": true, 00:09:25.801 "data_offset": 0, 00:09:25.801 "data_size": 65536 00:09:25.801 }, 00:09:25.801 { 00:09:25.801 "name": "BaseBdev3", 00:09:25.801 "uuid": "6ad02f2b-2310-4a89-ade3-6e036e5f7445", 00:09:25.801 "is_configured": true, 00:09:25.801 "data_offset": 0, 00:09:25.801 "data_size": 65536 00:09:25.801 } 00:09:25.801 ] 00:09:25.801 } 00:09:25.801 } 00:09:25.801 }' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:25.801 BaseBdev2 00:09:25.801 BaseBdev3' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.801 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.061 [2024-12-12 19:38:08.678481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.061 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.062 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.062 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.062 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.062 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.062 "name": "Existed_Raid", 00:09:26.062 "uuid": "9fb47920-db34-48ba-9f6d-21d8a7f99476", 00:09:26.062 "strip_size_kb": 0, 00:09:26.062 "state": "online", 00:09:26.062 "raid_level": "raid1", 00:09:26.062 "superblock": false, 00:09:26.062 "num_base_bdevs": 3, 00:09:26.062 "num_base_bdevs_discovered": 2, 00:09:26.062 "num_base_bdevs_operational": 2, 00:09:26.062 "base_bdevs_list": [ 00:09:26.062 { 00:09:26.062 "name": null, 00:09:26.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.062 "is_configured": false, 00:09:26.062 "data_offset": 0, 00:09:26.062 "data_size": 65536 00:09:26.062 }, 00:09:26.062 { 00:09:26.062 "name": "BaseBdev2", 00:09:26.062 "uuid": "cfaae9c8-8a73-4844-b10f-17677a39c96a", 00:09:26.062 "is_configured": true, 00:09:26.062 "data_offset": 0, 00:09:26.062 "data_size": 65536 00:09:26.062 }, 00:09:26.062 { 00:09:26.062 "name": "BaseBdev3", 00:09:26.062 "uuid": "6ad02f2b-2310-4a89-ade3-6e036e5f7445", 00:09:26.062 "is_configured": true, 00:09:26.062 "data_offset": 0, 00:09:26.062 "data_size": 65536 00:09:26.062 } 00:09:26.062 ] 
00:09:26.062 }' 00:09:26.062 19:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.062 19:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.632 [2024-12-12 19:38:09.282146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.632 19:38:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.632 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.632 [2024-12-12 19:38:09.445903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.632 [2024-12-12 19:38:09.446036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.892 [2024-12-12 19:38:09.550242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.892 [2024-12-12 19:38:09.550306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.892 [2024-12-12 19:38:09.550320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.892 19:38:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.892 BaseBdev2 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.892 
19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.892 [ 00:09:26.892 { 00:09:26.892 "name": "BaseBdev2", 00:09:26.892 "aliases": [ 00:09:26.892 "80623284-d815-47d1-8f6e-9027614fc543" 00:09:26.892 ], 00:09:26.892 "product_name": "Malloc disk", 00:09:26.892 "block_size": 512, 00:09:26.892 "num_blocks": 65536, 00:09:26.892 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:26.892 "assigned_rate_limits": { 00:09:26.892 "rw_ios_per_sec": 0, 00:09:26.892 "rw_mbytes_per_sec": 0, 00:09:26.892 "r_mbytes_per_sec": 0, 00:09:26.892 "w_mbytes_per_sec": 0 00:09:26.892 }, 00:09:26.892 "claimed": false, 00:09:26.892 "zoned": false, 00:09:26.892 "supported_io_types": { 00:09:26.892 "read": true, 00:09:26.892 "write": true, 00:09:26.892 "unmap": true, 00:09:26.892 "flush": true, 00:09:26.892 "reset": true, 00:09:26.892 "nvme_admin": false, 00:09:26.892 "nvme_io": false, 00:09:26.892 "nvme_io_md": false, 00:09:26.892 "write_zeroes": true, 
00:09:26.892 "zcopy": true, 00:09:26.892 "get_zone_info": false, 00:09:26.892 "zone_management": false, 00:09:26.892 "zone_append": false, 00:09:26.892 "compare": false, 00:09:26.892 "compare_and_write": false, 00:09:26.892 "abort": true, 00:09:26.892 "seek_hole": false, 00:09:26.892 "seek_data": false, 00:09:26.892 "copy": true, 00:09:26.892 "nvme_iov_md": false 00:09:26.892 }, 00:09:26.892 "memory_domains": [ 00:09:26.892 { 00:09:26.892 "dma_device_id": "system", 00:09:26.892 "dma_device_type": 1 00:09:26.892 }, 00:09:26.892 { 00:09:26.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.892 "dma_device_type": 2 00:09:26.892 } 00:09:26.892 ], 00:09:26.892 "driver_specific": {} 00:09:26.892 } 00:09:26.892 ] 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.892 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.892 BaseBdev3 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.153 19:38:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.153 [ 00:09:27.153 { 00:09:27.153 "name": "BaseBdev3", 00:09:27.153 "aliases": [ 00:09:27.153 "3151dc39-fd96-45ad-8c11-36f407920baa" 00:09:27.153 ], 00:09:27.153 "product_name": "Malloc disk", 00:09:27.153 "block_size": 512, 00:09:27.153 "num_blocks": 65536, 00:09:27.153 "uuid": "3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:27.153 "assigned_rate_limits": { 00:09:27.153 "rw_ios_per_sec": 0, 00:09:27.153 "rw_mbytes_per_sec": 0, 00:09:27.153 "r_mbytes_per_sec": 0, 00:09:27.153 "w_mbytes_per_sec": 0 00:09:27.153 }, 00:09:27.153 "claimed": false, 00:09:27.153 "zoned": false, 00:09:27.153 "supported_io_types": { 00:09:27.153 "read": true, 00:09:27.153 "write": true, 00:09:27.153 "unmap": true, 00:09:27.153 "flush": true, 00:09:27.153 "reset": true, 00:09:27.153 "nvme_admin": false, 00:09:27.153 "nvme_io": false, 00:09:27.153 "nvme_io_md": false, 00:09:27.153 "write_zeroes": true, 
00:09:27.153 "zcopy": true, 00:09:27.153 "get_zone_info": false, 00:09:27.153 "zone_management": false, 00:09:27.153 "zone_append": false, 00:09:27.153 "compare": false, 00:09:27.153 "compare_and_write": false, 00:09:27.153 "abort": true, 00:09:27.153 "seek_hole": false, 00:09:27.153 "seek_data": false, 00:09:27.153 "copy": true, 00:09:27.153 "nvme_iov_md": false 00:09:27.153 }, 00:09:27.153 "memory_domains": [ 00:09:27.153 { 00:09:27.153 "dma_device_id": "system", 00:09:27.153 "dma_device_type": 1 00:09:27.153 }, 00:09:27.153 { 00:09:27.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.153 "dma_device_type": 2 00:09:27.153 } 00:09:27.153 ], 00:09:27.153 "driver_specific": {} 00:09:27.153 } 00:09:27.153 ] 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.153 [2024-12-12 19:38:09.778680] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.153 [2024-12-12 19:38:09.778811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.153 [2024-12-12 19:38:09.778857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.153 [2024-12-12 19:38:09.781020] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:27.153 "name": "Existed_Raid", 00:09:27.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.153 "strip_size_kb": 0, 00:09:27.153 "state": "configuring", 00:09:27.153 "raid_level": "raid1", 00:09:27.153 "superblock": false, 00:09:27.153 "num_base_bdevs": 3, 00:09:27.153 "num_base_bdevs_discovered": 2, 00:09:27.153 "num_base_bdevs_operational": 3, 00:09:27.153 "base_bdevs_list": [ 00:09:27.153 { 00:09:27.153 "name": "BaseBdev1", 00:09:27.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.153 "is_configured": false, 00:09:27.153 "data_offset": 0, 00:09:27.153 "data_size": 0 00:09:27.153 }, 00:09:27.153 { 00:09:27.153 "name": "BaseBdev2", 00:09:27.153 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:27.153 "is_configured": true, 00:09:27.153 "data_offset": 0, 00:09:27.153 "data_size": 65536 00:09:27.153 }, 00:09:27.153 { 00:09:27.153 "name": "BaseBdev3", 00:09:27.153 "uuid": "3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:27.153 "is_configured": true, 00:09:27.153 "data_offset": 0, 00:09:27.153 "data_size": 65536 00:09:27.153 } 00:09:27.153 ] 00:09:27.153 }' 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.153 19:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.413 [2024-12-12 19:38:10.230361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.413 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.673 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.673 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.673 "name": "Existed_Raid", 00:09:27.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.673 "strip_size_kb": 0, 00:09:27.673 "state": "configuring", 00:09:27.673 "raid_level": "raid1", 00:09:27.673 "superblock": false, 00:09:27.673 "num_base_bdevs": 3, 
00:09:27.673 "num_base_bdevs_discovered": 1, 00:09:27.673 "num_base_bdevs_operational": 3, 00:09:27.673 "base_bdevs_list": [ 00:09:27.673 { 00:09:27.673 "name": "BaseBdev1", 00:09:27.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.673 "is_configured": false, 00:09:27.673 "data_offset": 0, 00:09:27.673 "data_size": 0 00:09:27.673 }, 00:09:27.673 { 00:09:27.673 "name": null, 00:09:27.673 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:27.673 "is_configured": false, 00:09:27.673 "data_offset": 0, 00:09:27.673 "data_size": 65536 00:09:27.673 }, 00:09:27.673 { 00:09:27.673 "name": "BaseBdev3", 00:09:27.673 "uuid": "3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:27.673 "is_configured": true, 00:09:27.673 "data_offset": 0, 00:09:27.673 "data_size": 65536 00:09:27.673 } 00:09:27.673 ] 00:09:27.673 }' 00:09:27.673 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.673 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.933 19:38:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.933 [2024-12-12 19:38:10.761969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.933 BaseBdev1 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.933 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.193 [ 00:09:28.193 { 00:09:28.193 "name": "BaseBdev1", 00:09:28.193 "aliases": [ 00:09:28.193 "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b" 00:09:28.193 ], 00:09:28.193 "product_name": "Malloc disk", 
00:09:28.193 "block_size": 512, 00:09:28.193 "num_blocks": 65536, 00:09:28.193 "uuid": "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b", 00:09:28.193 "assigned_rate_limits": { 00:09:28.193 "rw_ios_per_sec": 0, 00:09:28.193 "rw_mbytes_per_sec": 0, 00:09:28.193 "r_mbytes_per_sec": 0, 00:09:28.193 "w_mbytes_per_sec": 0 00:09:28.193 }, 00:09:28.193 "claimed": true, 00:09:28.193 "claim_type": "exclusive_write", 00:09:28.193 "zoned": false, 00:09:28.193 "supported_io_types": { 00:09:28.193 "read": true, 00:09:28.193 "write": true, 00:09:28.193 "unmap": true, 00:09:28.193 "flush": true, 00:09:28.194 "reset": true, 00:09:28.194 "nvme_admin": false, 00:09:28.194 "nvme_io": false, 00:09:28.194 "nvme_io_md": false, 00:09:28.194 "write_zeroes": true, 00:09:28.194 "zcopy": true, 00:09:28.194 "get_zone_info": false, 00:09:28.194 "zone_management": false, 00:09:28.194 "zone_append": false, 00:09:28.194 "compare": false, 00:09:28.194 "compare_and_write": false, 00:09:28.194 "abort": true, 00:09:28.194 "seek_hole": false, 00:09:28.194 "seek_data": false, 00:09:28.194 "copy": true, 00:09:28.194 "nvme_iov_md": false 00:09:28.194 }, 00:09:28.194 "memory_domains": [ 00:09:28.194 { 00:09:28.194 "dma_device_id": "system", 00:09:28.194 "dma_device_type": 1 00:09:28.194 }, 00:09:28.194 { 00:09:28.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.194 "dma_device_type": 2 00:09:28.194 } 00:09:28.194 ], 00:09:28.194 "driver_specific": {} 00:09:28.194 } 00:09:28.194 ] 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.194 "name": "Existed_Raid", 00:09:28.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.194 "strip_size_kb": 0, 00:09:28.194 "state": "configuring", 00:09:28.194 "raid_level": "raid1", 00:09:28.194 "superblock": false, 00:09:28.194 "num_base_bdevs": 3, 00:09:28.194 "num_base_bdevs_discovered": 2, 00:09:28.194 "num_base_bdevs_operational": 3, 00:09:28.194 "base_bdevs_list": [ 00:09:28.194 { 00:09:28.194 "name": "BaseBdev1", 00:09:28.194 "uuid": 
"a94fd3e7-1dce-44cd-bccf-45bc874bbc3b", 00:09:28.194 "is_configured": true, 00:09:28.194 "data_offset": 0, 00:09:28.194 "data_size": 65536 00:09:28.194 }, 00:09:28.194 { 00:09:28.194 "name": null, 00:09:28.194 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:28.194 "is_configured": false, 00:09:28.194 "data_offset": 0, 00:09:28.194 "data_size": 65536 00:09:28.194 }, 00:09:28.194 { 00:09:28.194 "name": "BaseBdev3", 00:09:28.194 "uuid": "3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:28.194 "is_configured": true, 00:09:28.194 "data_offset": 0, 00:09:28.194 "data_size": 65536 00:09:28.194 } 00:09:28.194 ] 00:09:28.194 }' 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.194 19:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.454 [2024-12-12 19:38:11.229202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.454 19:38:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.454 "name": "Existed_Raid", 00:09:28.454 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:28.454 "strip_size_kb": 0, 00:09:28.454 "state": "configuring", 00:09:28.454 "raid_level": "raid1", 00:09:28.454 "superblock": false, 00:09:28.454 "num_base_bdevs": 3, 00:09:28.454 "num_base_bdevs_discovered": 1, 00:09:28.454 "num_base_bdevs_operational": 3, 00:09:28.454 "base_bdevs_list": [ 00:09:28.454 { 00:09:28.454 "name": "BaseBdev1", 00:09:28.454 "uuid": "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b", 00:09:28.454 "is_configured": true, 00:09:28.454 "data_offset": 0, 00:09:28.454 "data_size": 65536 00:09:28.454 }, 00:09:28.454 { 00:09:28.454 "name": null, 00:09:28.454 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:28.454 "is_configured": false, 00:09:28.454 "data_offset": 0, 00:09:28.454 "data_size": 65536 00:09:28.454 }, 00:09:28.454 { 00:09:28.454 "name": null, 00:09:28.454 "uuid": "3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:28.454 "is_configured": false, 00:09:28.454 "data_offset": 0, 00:09:28.454 "data_size": 65536 00:09:28.454 } 00:09:28.454 ] 00:09:28.454 }' 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.454 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.024 [2024-12-12 19:38:11.736397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.024 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.024 "name": "Existed_Raid", 00:09:29.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.024 "strip_size_kb": 0, 00:09:29.024 "state": "configuring", 00:09:29.024 "raid_level": "raid1", 00:09:29.025 "superblock": false, 00:09:29.025 "num_base_bdevs": 3, 00:09:29.025 "num_base_bdevs_discovered": 2, 00:09:29.025 "num_base_bdevs_operational": 3, 00:09:29.025 "base_bdevs_list": [ 00:09:29.025 { 00:09:29.025 "name": "BaseBdev1", 00:09:29.025 "uuid": "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b", 00:09:29.025 "is_configured": true, 00:09:29.025 "data_offset": 0, 00:09:29.025 "data_size": 65536 00:09:29.025 }, 00:09:29.025 { 00:09:29.025 "name": null, 00:09:29.025 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:29.025 "is_configured": false, 00:09:29.025 "data_offset": 0, 00:09:29.025 "data_size": 65536 00:09:29.025 }, 00:09:29.025 { 00:09:29.025 "name": "BaseBdev3", 00:09:29.025 "uuid": "3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:29.025 "is_configured": true, 00:09:29.025 "data_offset": 0, 00:09:29.025 "data_size": 65536 00:09:29.025 } 00:09:29.025 ] 00:09:29.025 }' 00:09:29.025 19:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.025 19:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.595 [2024-12-12 19:38:12.231641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.595 19:38:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.595 "name": "Existed_Raid", 00:09:29.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.595 "strip_size_kb": 0, 00:09:29.595 "state": "configuring", 00:09:29.595 "raid_level": "raid1", 00:09:29.595 "superblock": false, 00:09:29.595 "num_base_bdevs": 3, 00:09:29.595 "num_base_bdevs_discovered": 1, 00:09:29.595 "num_base_bdevs_operational": 3, 00:09:29.595 "base_bdevs_list": [ 00:09:29.595 { 00:09:29.595 "name": null, 00:09:29.595 "uuid": "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b", 00:09:29.595 "is_configured": false, 00:09:29.595 "data_offset": 0, 00:09:29.595 "data_size": 65536 00:09:29.595 }, 00:09:29.595 { 00:09:29.595 "name": null, 00:09:29.595 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:29.595 "is_configured": false, 00:09:29.595 "data_offset": 0, 00:09:29.595 "data_size": 65536 00:09:29.595 }, 00:09:29.595 { 00:09:29.595 "name": "BaseBdev3", 00:09:29.595 "uuid": "3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:29.595 "is_configured": true, 00:09:29.595 "data_offset": 0, 00:09:29.595 "data_size": 65536 00:09:29.595 } 00:09:29.595 ] 00:09:29.595 }' 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.595 19:38:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.165 [2024-12-12 19:38:12.818265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.165 "name": "Existed_Raid", 00:09:30.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.165 "strip_size_kb": 0, 00:09:30.165 "state": "configuring", 00:09:30.165 "raid_level": "raid1", 00:09:30.165 "superblock": false, 00:09:30.165 "num_base_bdevs": 3, 00:09:30.165 "num_base_bdevs_discovered": 2, 00:09:30.165 "num_base_bdevs_operational": 3, 00:09:30.165 "base_bdevs_list": [ 00:09:30.165 { 00:09:30.165 "name": null, 00:09:30.165 "uuid": "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b", 00:09:30.165 "is_configured": false, 00:09:30.165 "data_offset": 0, 00:09:30.165 "data_size": 65536 00:09:30.165 }, 00:09:30.165 { 00:09:30.165 "name": "BaseBdev2", 00:09:30.165 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:30.165 "is_configured": true, 00:09:30.165 "data_offset": 0, 00:09:30.165 "data_size": 65536 00:09:30.165 }, 00:09:30.165 { 
00:09:30.165 "name": "BaseBdev3", 00:09:30.165 "uuid": "3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:30.165 "is_configured": true, 00:09:30.165 "data_offset": 0, 00:09:30.165 "data_size": 65536 00:09:30.165 } 00:09:30.165 ] 00:09:30.165 }' 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.165 19:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a94fd3e7-1dce-44cd-bccf-45bc874bbc3b 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.734 19:38:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.734 [2024-12-12 19:38:13.423843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:30.734 [2024-12-12 19:38:13.423900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:30.734 [2024-12-12 19:38:13.423909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:30.734 [2024-12-12 19:38:13.424216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:30.734 [2024-12-12 19:38:13.424385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:30.734 [2024-12-12 19:38:13.424397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:30.734 [2024-12-12 19:38:13.424686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.734 NewBaseBdev 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.734 [ 00:09:30.734 { 00:09:30.734 "name": "NewBaseBdev", 00:09:30.734 "aliases": [ 00:09:30.734 "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b" 00:09:30.734 ], 00:09:30.734 "product_name": "Malloc disk", 00:09:30.734 "block_size": 512, 00:09:30.734 "num_blocks": 65536, 00:09:30.734 "uuid": "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b", 00:09:30.734 "assigned_rate_limits": { 00:09:30.734 "rw_ios_per_sec": 0, 00:09:30.734 "rw_mbytes_per_sec": 0, 00:09:30.734 "r_mbytes_per_sec": 0, 00:09:30.734 "w_mbytes_per_sec": 0 00:09:30.734 }, 00:09:30.734 "claimed": true, 00:09:30.734 "claim_type": "exclusive_write", 00:09:30.734 "zoned": false, 00:09:30.734 "supported_io_types": { 00:09:30.734 "read": true, 00:09:30.734 "write": true, 00:09:30.734 "unmap": true, 00:09:30.734 "flush": true, 00:09:30.734 "reset": true, 00:09:30.734 "nvme_admin": false, 00:09:30.734 "nvme_io": false, 00:09:30.734 "nvme_io_md": false, 00:09:30.734 "write_zeroes": true, 00:09:30.734 "zcopy": true, 00:09:30.734 "get_zone_info": false, 00:09:30.734 "zone_management": false, 00:09:30.734 "zone_append": false, 00:09:30.734 "compare": false, 00:09:30.734 "compare_and_write": false, 00:09:30.734 "abort": true, 00:09:30.734 "seek_hole": false, 00:09:30.734 "seek_data": false, 00:09:30.734 "copy": true, 00:09:30.734 "nvme_iov_md": false 00:09:30.734 }, 00:09:30.734 "memory_domains": [ 00:09:30.734 { 00:09:30.734 
"dma_device_id": "system", 00:09:30.734 "dma_device_type": 1 00:09:30.734 }, 00:09:30.734 { 00:09:30.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.734 "dma_device_type": 2 00:09:30.734 } 00:09:30.734 ], 00:09:30.734 "driver_specific": {} 00:09:30.734 } 00:09:30.734 ] 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.734 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.735 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.735 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.735 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.735 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:30.735 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.735 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.735 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.735 "name": "Existed_Raid", 00:09:30.735 "uuid": "36843e7a-e1a1-4c30-873c-47bc8d6bebcd", 00:09:30.735 "strip_size_kb": 0, 00:09:30.735 "state": "online", 00:09:30.735 "raid_level": "raid1", 00:09:30.735 "superblock": false, 00:09:30.735 "num_base_bdevs": 3, 00:09:30.735 "num_base_bdevs_discovered": 3, 00:09:30.735 "num_base_bdevs_operational": 3, 00:09:30.735 "base_bdevs_list": [ 00:09:30.735 { 00:09:30.735 "name": "NewBaseBdev", 00:09:30.735 "uuid": "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b", 00:09:30.735 "is_configured": true, 00:09:30.735 "data_offset": 0, 00:09:30.735 "data_size": 65536 00:09:30.735 }, 00:09:30.735 { 00:09:30.735 "name": "BaseBdev2", 00:09:30.735 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:30.735 "is_configured": true, 00:09:30.735 "data_offset": 0, 00:09:30.735 "data_size": 65536 00:09:30.735 }, 00:09:30.735 { 00:09:30.735 "name": "BaseBdev3", 00:09:30.735 "uuid": "3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:30.735 "is_configured": true, 00:09:30.735 "data_offset": 0, 00:09:30.735 "data_size": 65536 00:09:30.735 } 00:09:30.735 ] 00:09:30.735 }' 00:09:30.735 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.735 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.304 19:38:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.304 [2024-12-12 19:38:13.939353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.304 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.304 "name": "Existed_Raid", 00:09:31.304 "aliases": [ 00:09:31.304 "36843e7a-e1a1-4c30-873c-47bc8d6bebcd" 00:09:31.304 ], 00:09:31.304 "product_name": "Raid Volume", 00:09:31.304 "block_size": 512, 00:09:31.304 "num_blocks": 65536, 00:09:31.304 "uuid": "36843e7a-e1a1-4c30-873c-47bc8d6bebcd", 00:09:31.304 "assigned_rate_limits": { 00:09:31.304 "rw_ios_per_sec": 0, 00:09:31.304 "rw_mbytes_per_sec": 0, 00:09:31.304 "r_mbytes_per_sec": 0, 00:09:31.304 "w_mbytes_per_sec": 0 00:09:31.304 }, 00:09:31.304 "claimed": false, 00:09:31.304 "zoned": false, 00:09:31.304 "supported_io_types": { 00:09:31.304 "read": true, 00:09:31.304 "write": true, 00:09:31.304 "unmap": false, 00:09:31.304 "flush": false, 00:09:31.304 "reset": true, 00:09:31.304 "nvme_admin": false, 00:09:31.304 "nvme_io": false, 00:09:31.304 "nvme_io_md": false, 00:09:31.304 "write_zeroes": true, 00:09:31.304 "zcopy": false, 00:09:31.304 
"get_zone_info": false, 00:09:31.304 "zone_management": false, 00:09:31.304 "zone_append": false, 00:09:31.304 "compare": false, 00:09:31.304 "compare_and_write": false, 00:09:31.304 "abort": false, 00:09:31.304 "seek_hole": false, 00:09:31.304 "seek_data": false, 00:09:31.304 "copy": false, 00:09:31.304 "nvme_iov_md": false 00:09:31.304 }, 00:09:31.304 "memory_domains": [ 00:09:31.304 { 00:09:31.304 "dma_device_id": "system", 00:09:31.304 "dma_device_type": 1 00:09:31.305 }, 00:09:31.305 { 00:09:31.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.305 "dma_device_type": 2 00:09:31.305 }, 00:09:31.305 { 00:09:31.305 "dma_device_id": "system", 00:09:31.305 "dma_device_type": 1 00:09:31.305 }, 00:09:31.305 { 00:09:31.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.305 "dma_device_type": 2 00:09:31.305 }, 00:09:31.305 { 00:09:31.305 "dma_device_id": "system", 00:09:31.305 "dma_device_type": 1 00:09:31.305 }, 00:09:31.305 { 00:09:31.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.305 "dma_device_type": 2 00:09:31.305 } 00:09:31.305 ], 00:09:31.305 "driver_specific": { 00:09:31.305 "raid": { 00:09:31.305 "uuid": "36843e7a-e1a1-4c30-873c-47bc8d6bebcd", 00:09:31.305 "strip_size_kb": 0, 00:09:31.305 "state": "online", 00:09:31.305 "raid_level": "raid1", 00:09:31.305 "superblock": false, 00:09:31.305 "num_base_bdevs": 3, 00:09:31.305 "num_base_bdevs_discovered": 3, 00:09:31.305 "num_base_bdevs_operational": 3, 00:09:31.305 "base_bdevs_list": [ 00:09:31.305 { 00:09:31.305 "name": "NewBaseBdev", 00:09:31.305 "uuid": "a94fd3e7-1dce-44cd-bccf-45bc874bbc3b", 00:09:31.305 "is_configured": true, 00:09:31.305 "data_offset": 0, 00:09:31.305 "data_size": 65536 00:09:31.305 }, 00:09:31.305 { 00:09:31.305 "name": "BaseBdev2", 00:09:31.305 "uuid": "80623284-d815-47d1-8f6e-9027614fc543", 00:09:31.305 "is_configured": true, 00:09:31.305 "data_offset": 0, 00:09:31.305 "data_size": 65536 00:09:31.305 }, 00:09:31.305 { 00:09:31.305 "name": "BaseBdev3", 00:09:31.305 "uuid": 
"3151dc39-fd96-45ad-8c11-36f407920baa", 00:09:31.305 "is_configured": true, 00:09:31.305 "data_offset": 0, 00:09:31.305 "data_size": 65536 00:09:31.305 } 00:09:31.305 ] 00:09:31.305 } 00:09:31.305 } 00:09:31.305 }' 00:09:31.305 19:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:31.305 BaseBdev2 00:09:31.305 BaseBdev3' 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.305 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:31.565 [2024-12-12 19:38:14.214593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.565 [2024-12-12 19:38:14.214646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.565 [2024-12-12 19:38:14.214742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.565 [2024-12-12 19:38:14.215058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.565 [2024-12-12 19:38:14.215069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69088 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69088 ']' 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69088 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69088 00:09:31.565 killing process with pid 69088 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69088' 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69088 00:09:31.565 
[2024-12-12 19:38:14.253979] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.565 19:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69088 00:09:31.825 [2024-12-12 19:38:14.587764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:33.207 00:09:33.207 real 0m10.916s 00:09:33.207 user 0m17.229s 00:09:33.207 sys 0m1.919s 00:09:33.207 ************************************ 00:09:33.207 END TEST raid_state_function_test 00:09:33.207 ************************************ 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.207 19:38:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:33.207 19:38:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:33.207 19:38:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.207 19:38:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.207 ************************************ 00:09:33.207 START TEST raid_state_function_test_sb 00:09:33.207 ************************************ 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:33.207 19:38:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:33.207 
19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69718 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69718' 00:09:33.207 Process raid pid: 69718 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69718 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69718 ']' 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.207 19:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.207 [2024-12-12 19:38:16.001240] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:33.207 [2024-12-12 19:38:16.001360] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.467 [2024-12-12 19:38:16.177106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.727 [2024-12-12 19:38:16.333361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.727 [2024-12-12 19:38:16.554351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.727 [2024-12-12 19:38:16.554487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.296 [2024-12-12 19:38:16.837800] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.296 [2024-12-12 19:38:16.837854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.296 [2024-12-12 19:38:16.837865] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.296 [2024-12-12 19:38:16.837874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.296 [2024-12-12 19:38:16.837885] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:34.296 [2024-12-12 19:38:16.837894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.296 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.296 "name": "Existed_Raid", 00:09:34.296 "uuid": "d491849c-55e7-4f91-b332-940f0acd98db", 00:09:34.296 "strip_size_kb": 0, 00:09:34.297 "state": "configuring", 00:09:34.297 "raid_level": "raid1", 00:09:34.297 "superblock": true, 00:09:34.297 "num_base_bdevs": 3, 00:09:34.297 "num_base_bdevs_discovered": 0, 00:09:34.297 "num_base_bdevs_operational": 3, 00:09:34.297 "base_bdevs_list": [ 00:09:34.297 { 00:09:34.297 "name": "BaseBdev1", 00:09:34.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.297 "is_configured": false, 00:09:34.297 "data_offset": 0, 00:09:34.297 "data_size": 0 00:09:34.297 }, 00:09:34.297 { 00:09:34.297 "name": "BaseBdev2", 00:09:34.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.297 "is_configured": false, 00:09:34.297 "data_offset": 0, 00:09:34.297 "data_size": 0 00:09:34.297 }, 00:09:34.297 { 00:09:34.297 "name": "BaseBdev3", 00:09:34.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.297 "is_configured": false, 00:09:34.297 "data_offset": 0, 00:09:34.297 "data_size": 0 00:09:34.297 } 00:09:34.297 ] 00:09:34.297 }' 00:09:34.297 19:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.297 19:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.556 [2024-12-12 19:38:17.261026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.556 [2024-12-12 19:38:17.261119] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.556 [2024-12-12 19:38:17.272991] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.556 [2024-12-12 19:38:17.273083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.556 [2024-12-12 19:38:17.273134] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.556 [2024-12-12 19:38:17.273173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.556 [2024-12-12 19:38:17.273200] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.556 [2024-12-12 19:38:17.273239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.556 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.557 [2024-12-12 19:38:17.319370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.557 BaseBdev1 
00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.557 [ 00:09:34.557 { 00:09:34.557 "name": "BaseBdev1", 00:09:34.557 "aliases": [ 00:09:34.557 "d22607f7-25fa-4cdb-ab41-005e33079444" 00:09:34.557 ], 00:09:34.557 "product_name": "Malloc disk", 00:09:34.557 "block_size": 512, 00:09:34.557 "num_blocks": 65536, 00:09:34.557 "uuid": "d22607f7-25fa-4cdb-ab41-005e33079444", 00:09:34.557 "assigned_rate_limits": { 00:09:34.557 
"rw_ios_per_sec": 0, 00:09:34.557 "rw_mbytes_per_sec": 0, 00:09:34.557 "r_mbytes_per_sec": 0, 00:09:34.557 "w_mbytes_per_sec": 0 00:09:34.557 }, 00:09:34.557 "claimed": true, 00:09:34.557 "claim_type": "exclusive_write", 00:09:34.557 "zoned": false, 00:09:34.557 "supported_io_types": { 00:09:34.557 "read": true, 00:09:34.557 "write": true, 00:09:34.557 "unmap": true, 00:09:34.557 "flush": true, 00:09:34.557 "reset": true, 00:09:34.557 "nvme_admin": false, 00:09:34.557 "nvme_io": false, 00:09:34.557 "nvme_io_md": false, 00:09:34.557 "write_zeroes": true, 00:09:34.557 "zcopy": true, 00:09:34.557 "get_zone_info": false, 00:09:34.557 "zone_management": false, 00:09:34.557 "zone_append": false, 00:09:34.557 "compare": false, 00:09:34.557 "compare_and_write": false, 00:09:34.557 "abort": true, 00:09:34.557 "seek_hole": false, 00:09:34.557 "seek_data": false, 00:09:34.557 "copy": true, 00:09:34.557 "nvme_iov_md": false 00:09:34.557 }, 00:09:34.557 "memory_domains": [ 00:09:34.557 { 00:09:34.557 "dma_device_id": "system", 00:09:34.557 "dma_device_type": 1 00:09:34.557 }, 00:09:34.557 { 00:09:34.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.557 "dma_device_type": 2 00:09:34.557 } 00:09:34.557 ], 00:09:34.557 "driver_specific": {} 00:09:34.557 } 00:09:34.557 ] 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.557 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.816 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.816 "name": "Existed_Raid", 00:09:34.816 "uuid": "484548be-e764-4720-be9c-1a8e16c51e11", 00:09:34.816 "strip_size_kb": 0, 00:09:34.816 "state": "configuring", 00:09:34.816 "raid_level": "raid1", 00:09:34.816 "superblock": true, 00:09:34.816 "num_base_bdevs": 3, 00:09:34.816 "num_base_bdevs_discovered": 1, 00:09:34.816 "num_base_bdevs_operational": 3, 00:09:34.816 "base_bdevs_list": [ 00:09:34.816 { 00:09:34.816 "name": "BaseBdev1", 00:09:34.816 "uuid": "d22607f7-25fa-4cdb-ab41-005e33079444", 00:09:34.816 "is_configured": true, 00:09:34.816 "data_offset": 2048, 00:09:34.816 "data_size": 63488 
00:09:34.816 }, 00:09:34.816 { 00:09:34.816 "name": "BaseBdev2", 00:09:34.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.816 "is_configured": false, 00:09:34.816 "data_offset": 0, 00:09:34.816 "data_size": 0 00:09:34.816 }, 00:09:34.816 { 00:09:34.816 "name": "BaseBdev3", 00:09:34.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.816 "is_configured": false, 00:09:34.816 "data_offset": 0, 00:09:34.816 "data_size": 0 00:09:34.816 } 00:09:34.816 ] 00:09:34.816 }' 00:09:34.816 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.816 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.076 [2024-12-12 19:38:17.838567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.076 [2024-12-12 19:38:17.838620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.076 [2024-12-12 19:38:17.850584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.076 [2024-12-12 19:38:17.852322] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.076 [2024-12-12 19:38:17.852369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.076 [2024-12-12 19:38:17.852379] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.076 [2024-12-12 19:38:17.852387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.076 "name": "Existed_Raid", 00:09:35.076 "uuid": "8495728b-ea5c-4b8e-924f-a275504d0246", 00:09:35.076 "strip_size_kb": 0, 00:09:35.076 "state": "configuring", 00:09:35.076 "raid_level": "raid1", 00:09:35.076 "superblock": true, 00:09:35.076 "num_base_bdevs": 3, 00:09:35.076 "num_base_bdevs_discovered": 1, 00:09:35.076 "num_base_bdevs_operational": 3, 00:09:35.076 "base_bdevs_list": [ 00:09:35.076 { 00:09:35.076 "name": "BaseBdev1", 00:09:35.076 "uuid": "d22607f7-25fa-4cdb-ab41-005e33079444", 00:09:35.076 "is_configured": true, 00:09:35.076 "data_offset": 2048, 00:09:35.076 "data_size": 63488 00:09:35.076 }, 00:09:35.076 { 00:09:35.076 "name": "BaseBdev2", 00:09:35.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.076 "is_configured": false, 00:09:35.076 "data_offset": 0, 00:09:35.076 "data_size": 0 00:09:35.076 }, 00:09:35.076 { 00:09:35.076 "name": "BaseBdev3", 00:09:35.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.076 "is_configured": false, 00:09:35.076 "data_offset": 0, 00:09:35.076 "data_size": 0 00:09:35.076 } 00:09:35.076 ] 00:09:35.076 }' 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.076 19:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.646 [2024-12-12 19:38:18.356511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.646 BaseBdev2 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.646 [ 00:09:35.646 { 00:09:35.646 "name": "BaseBdev2", 00:09:35.646 "aliases": [ 00:09:35.646 "b4808a25-81c8-4c51-a31b-256676d61b61" 00:09:35.646 ], 00:09:35.646 "product_name": "Malloc disk", 00:09:35.646 "block_size": 512, 00:09:35.646 "num_blocks": 65536, 00:09:35.646 "uuid": "b4808a25-81c8-4c51-a31b-256676d61b61", 00:09:35.646 "assigned_rate_limits": { 00:09:35.646 "rw_ios_per_sec": 0, 00:09:35.646 "rw_mbytes_per_sec": 0, 00:09:35.646 "r_mbytes_per_sec": 0, 00:09:35.646 "w_mbytes_per_sec": 0 00:09:35.646 }, 00:09:35.646 "claimed": true, 00:09:35.646 "claim_type": "exclusive_write", 00:09:35.646 "zoned": false, 00:09:35.646 "supported_io_types": { 00:09:35.646 "read": true, 00:09:35.646 "write": true, 00:09:35.646 "unmap": true, 00:09:35.646 "flush": true, 00:09:35.646 "reset": true, 00:09:35.646 "nvme_admin": false, 00:09:35.646 "nvme_io": false, 00:09:35.646 "nvme_io_md": false, 00:09:35.646 "write_zeroes": true, 00:09:35.646 "zcopy": true, 00:09:35.646 "get_zone_info": false, 00:09:35.646 "zone_management": false, 00:09:35.646 "zone_append": false, 00:09:35.646 "compare": false, 00:09:35.646 "compare_and_write": false, 00:09:35.646 "abort": true, 00:09:35.646 "seek_hole": false, 00:09:35.646 "seek_data": false, 00:09:35.646 "copy": true, 00:09:35.646 "nvme_iov_md": false 00:09:35.646 }, 00:09:35.646 "memory_domains": [ 00:09:35.646 { 00:09:35.646 "dma_device_id": "system", 00:09:35.646 "dma_device_type": 1 00:09:35.646 }, 00:09:35.646 { 00:09:35.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.646 "dma_device_type": 2 00:09:35.646 } 00:09:35.646 ], 00:09:35.646 "driver_specific": {} 00:09:35.646 } 00:09:35.646 ] 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.646 
19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.646 "name": "Existed_Raid", 00:09:35.646 "uuid": "8495728b-ea5c-4b8e-924f-a275504d0246", 00:09:35.646 "strip_size_kb": 0, 00:09:35.646 "state": "configuring", 00:09:35.646 "raid_level": "raid1", 00:09:35.646 "superblock": true, 00:09:35.646 "num_base_bdevs": 3, 00:09:35.646 "num_base_bdevs_discovered": 2, 00:09:35.646 "num_base_bdevs_operational": 3, 00:09:35.646 "base_bdevs_list": [ 00:09:35.646 { 00:09:35.646 "name": "BaseBdev1", 00:09:35.646 "uuid": "d22607f7-25fa-4cdb-ab41-005e33079444", 00:09:35.646 "is_configured": true, 00:09:35.646 "data_offset": 2048, 00:09:35.646 "data_size": 63488 00:09:35.646 }, 00:09:35.646 { 00:09:35.646 "name": "BaseBdev2", 00:09:35.646 "uuid": "b4808a25-81c8-4c51-a31b-256676d61b61", 00:09:35.646 "is_configured": true, 00:09:35.646 "data_offset": 2048, 00:09:35.646 "data_size": 63488 00:09:35.646 }, 00:09:35.646 { 00:09:35.646 "name": "BaseBdev3", 00:09:35.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.646 "is_configured": false, 00:09:35.646 "data_offset": 0, 00:09:35.646 "data_size": 0 00:09:35.646 } 00:09:35.646 ] 00:09:35.646 }' 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.646 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.215 [2024-12-12 19:38:18.884766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.215 [2024-12-12 19:38:18.885001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:36.215 [2024-12-12 19:38:18.885039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:36.215 [2024-12-12 19:38:18.885318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:36.215 BaseBdev3 00:09:36.215 [2024-12-12 19:38:18.885478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:36.215 [2024-12-12 19:38:18.885494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:36.215 [2024-12-12 19:38:18.885721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.215 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.216 19:38:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.216 [ 00:09:36.216 { 00:09:36.216 "name": "BaseBdev3", 00:09:36.216 "aliases": [ 00:09:36.216 "9e851ade-3554-4d95-8fd2-9b5d4b56f336" 00:09:36.216 ], 00:09:36.216 "product_name": "Malloc disk", 00:09:36.216 "block_size": 512, 00:09:36.216 "num_blocks": 65536, 00:09:36.216 "uuid": "9e851ade-3554-4d95-8fd2-9b5d4b56f336", 00:09:36.216 "assigned_rate_limits": { 00:09:36.216 "rw_ios_per_sec": 0, 00:09:36.216 "rw_mbytes_per_sec": 0, 00:09:36.216 "r_mbytes_per_sec": 0, 00:09:36.216 "w_mbytes_per_sec": 0 00:09:36.216 }, 00:09:36.216 "claimed": true, 00:09:36.216 "claim_type": "exclusive_write", 00:09:36.216 "zoned": false, 00:09:36.216 "supported_io_types": { 00:09:36.216 "read": true, 00:09:36.216 "write": true, 00:09:36.216 "unmap": true, 00:09:36.216 "flush": true, 00:09:36.216 "reset": true, 00:09:36.216 "nvme_admin": false, 00:09:36.216 "nvme_io": false, 00:09:36.216 "nvme_io_md": false, 00:09:36.216 "write_zeroes": true, 00:09:36.216 "zcopy": true, 00:09:36.216 "get_zone_info": false, 00:09:36.216 "zone_management": false, 00:09:36.216 "zone_append": false, 00:09:36.216 "compare": false, 00:09:36.216 "compare_and_write": false, 00:09:36.216 "abort": true, 00:09:36.216 "seek_hole": false, 00:09:36.216 "seek_data": false, 00:09:36.216 "copy": true, 00:09:36.216 "nvme_iov_md": false 00:09:36.216 }, 00:09:36.216 "memory_domains": [ 00:09:36.216 { 00:09:36.216 "dma_device_id": "system", 00:09:36.216 "dma_device_type": 1 00:09:36.216 }, 00:09:36.216 { 00:09:36.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.216 "dma_device_type": 2 00:09:36.216 } 00:09:36.216 ], 00:09:36.216 "driver_specific": {} 00:09:36.216 } 00:09:36.216 ] 
00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.216 19:38:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.216 "name": "Existed_Raid", 00:09:36.216 "uuid": "8495728b-ea5c-4b8e-924f-a275504d0246", 00:09:36.216 "strip_size_kb": 0, 00:09:36.216 "state": "online", 00:09:36.216 "raid_level": "raid1", 00:09:36.216 "superblock": true, 00:09:36.216 "num_base_bdevs": 3, 00:09:36.216 "num_base_bdevs_discovered": 3, 00:09:36.216 "num_base_bdevs_operational": 3, 00:09:36.216 "base_bdevs_list": [ 00:09:36.216 { 00:09:36.216 "name": "BaseBdev1", 00:09:36.216 "uuid": "d22607f7-25fa-4cdb-ab41-005e33079444", 00:09:36.216 "is_configured": true, 00:09:36.216 "data_offset": 2048, 00:09:36.216 "data_size": 63488 00:09:36.216 }, 00:09:36.216 { 00:09:36.216 "name": "BaseBdev2", 00:09:36.216 "uuid": "b4808a25-81c8-4c51-a31b-256676d61b61", 00:09:36.216 "is_configured": true, 00:09:36.216 "data_offset": 2048, 00:09:36.216 "data_size": 63488 00:09:36.216 }, 00:09:36.216 { 00:09:36.216 "name": "BaseBdev3", 00:09:36.216 "uuid": "9e851ade-3554-4d95-8fd2-9b5d4b56f336", 00:09:36.216 "is_configured": true, 00:09:36.216 "data_offset": 2048, 00:09:36.216 "data_size": 63488 00:09:36.216 } 00:09:36.216 ] 00:09:36.216 }' 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.216 19:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.476 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.476 [2024-12-12 19:38:19.308411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.736 "name": "Existed_Raid", 00:09:36.736 "aliases": [ 00:09:36.736 "8495728b-ea5c-4b8e-924f-a275504d0246" 00:09:36.736 ], 00:09:36.736 "product_name": "Raid Volume", 00:09:36.736 "block_size": 512, 00:09:36.736 "num_blocks": 63488, 00:09:36.736 "uuid": "8495728b-ea5c-4b8e-924f-a275504d0246", 00:09:36.736 "assigned_rate_limits": { 00:09:36.736 "rw_ios_per_sec": 0, 00:09:36.736 "rw_mbytes_per_sec": 0, 00:09:36.736 "r_mbytes_per_sec": 0, 00:09:36.736 "w_mbytes_per_sec": 0 00:09:36.736 }, 00:09:36.736 "claimed": false, 00:09:36.736 "zoned": false, 00:09:36.736 "supported_io_types": { 00:09:36.736 "read": true, 00:09:36.736 "write": true, 00:09:36.736 "unmap": false, 00:09:36.736 "flush": false, 00:09:36.736 "reset": true, 00:09:36.736 "nvme_admin": false, 00:09:36.736 "nvme_io": false, 00:09:36.736 "nvme_io_md": false, 00:09:36.736 
"write_zeroes": true, 00:09:36.736 "zcopy": false, 00:09:36.736 "get_zone_info": false, 00:09:36.736 "zone_management": false, 00:09:36.736 "zone_append": false, 00:09:36.736 "compare": false, 00:09:36.736 "compare_and_write": false, 00:09:36.736 "abort": false, 00:09:36.736 "seek_hole": false, 00:09:36.736 "seek_data": false, 00:09:36.736 "copy": false, 00:09:36.736 "nvme_iov_md": false 00:09:36.736 }, 00:09:36.736 "memory_domains": [ 00:09:36.736 { 00:09:36.736 "dma_device_id": "system", 00:09:36.736 "dma_device_type": 1 00:09:36.736 }, 00:09:36.736 { 00:09:36.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.736 "dma_device_type": 2 00:09:36.736 }, 00:09:36.736 { 00:09:36.736 "dma_device_id": "system", 00:09:36.736 "dma_device_type": 1 00:09:36.736 }, 00:09:36.736 { 00:09:36.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.736 "dma_device_type": 2 00:09:36.736 }, 00:09:36.736 { 00:09:36.736 "dma_device_id": "system", 00:09:36.736 "dma_device_type": 1 00:09:36.736 }, 00:09:36.736 { 00:09:36.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.736 "dma_device_type": 2 00:09:36.736 } 00:09:36.736 ], 00:09:36.736 "driver_specific": { 00:09:36.736 "raid": { 00:09:36.736 "uuid": "8495728b-ea5c-4b8e-924f-a275504d0246", 00:09:36.736 "strip_size_kb": 0, 00:09:36.736 "state": "online", 00:09:36.736 "raid_level": "raid1", 00:09:36.736 "superblock": true, 00:09:36.736 "num_base_bdevs": 3, 00:09:36.736 "num_base_bdevs_discovered": 3, 00:09:36.736 "num_base_bdevs_operational": 3, 00:09:36.736 "base_bdevs_list": [ 00:09:36.736 { 00:09:36.736 "name": "BaseBdev1", 00:09:36.736 "uuid": "d22607f7-25fa-4cdb-ab41-005e33079444", 00:09:36.736 "is_configured": true, 00:09:36.736 "data_offset": 2048, 00:09:36.736 "data_size": 63488 00:09:36.736 }, 00:09:36.736 { 00:09:36.736 "name": "BaseBdev2", 00:09:36.736 "uuid": "b4808a25-81c8-4c51-a31b-256676d61b61", 00:09:36.736 "is_configured": true, 00:09:36.736 "data_offset": 2048, 00:09:36.736 "data_size": 63488 00:09:36.736 }, 
00:09:36.736 { 00:09:36.736 "name": "BaseBdev3", 00:09:36.736 "uuid": "9e851ade-3554-4d95-8fd2-9b5d4b56f336", 00:09:36.736 "is_configured": true, 00:09:36.736 "data_offset": 2048, 00:09:36.736 "data_size": 63488 00:09:36.736 } 00:09:36.736 ] 00:09:36.736 } 00:09:36.736 } 00:09:36.736 }' 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:36.736 BaseBdev2 00:09:36.736 BaseBdev3' 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.736 
19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.737 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.737 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.737 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.737 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.737 19:38:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.737 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.996 [2024-12-12 19:38:19.579754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.996 
19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.996 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.997 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.997 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.997 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.997 19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.997 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.997 "name": "Existed_Raid", 00:09:36.997 "uuid": "8495728b-ea5c-4b8e-924f-a275504d0246", 00:09:36.997 "strip_size_kb": 0, 00:09:36.997 "state": "online", 00:09:36.997 "raid_level": "raid1", 00:09:36.997 "superblock": true, 00:09:36.997 "num_base_bdevs": 3, 00:09:36.997 "num_base_bdevs_discovered": 2, 00:09:36.997 "num_base_bdevs_operational": 2, 00:09:36.997 "base_bdevs_list": [ 00:09:36.997 { 00:09:36.997 "name": null, 00:09:36.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.997 "is_configured": false, 00:09:36.997 "data_offset": 0, 00:09:36.997 "data_size": 63488 00:09:36.997 }, 00:09:36.997 { 00:09:36.997 "name": "BaseBdev2", 00:09:36.997 "uuid": "b4808a25-81c8-4c51-a31b-256676d61b61", 00:09:36.997 "is_configured": true, 00:09:36.997 "data_offset": 2048, 00:09:36.997 "data_size": 63488 00:09:36.997 }, 00:09:36.997 { 00:09:36.997 "name": "BaseBdev3", 00:09:36.997 "uuid": "9e851ade-3554-4d95-8fd2-9b5d4b56f336", 00:09:36.997 "is_configured": true, 00:09:36.997 "data_offset": 2048, 00:09:36.997 "data_size": 63488 00:09:36.997 } 00:09:36.997 ] 00:09:36.997 }' 00:09:36.997 19:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.997 
19:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.566 [2024-12-12 19:38:20.196179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.566 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.566 [2024-12-12 19:38:20.351902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.566 [2024-12-12 19:38:20.352095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.827 [2024-12-12 19:38:20.448808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.827 [2024-12-12 19:38:20.448942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.827 [2024-12-12 19:38:20.448984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.827 BaseBdev2 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.827 [ 00:09:37.827 { 00:09:37.827 "name": "BaseBdev2", 00:09:37.827 "aliases": [ 00:09:37.827 "64cf8717-0971-4ec3-a4a3-ef353d8b23e9" 00:09:37.827 ], 00:09:37.827 "product_name": "Malloc disk", 00:09:37.827 "block_size": 512, 00:09:37.827 "num_blocks": 65536, 00:09:37.827 "uuid": "64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:37.827 "assigned_rate_limits": { 00:09:37.827 "rw_ios_per_sec": 0, 00:09:37.827 "rw_mbytes_per_sec": 0, 00:09:37.827 "r_mbytes_per_sec": 0, 00:09:37.827 "w_mbytes_per_sec": 0 00:09:37.827 }, 00:09:37.827 "claimed": false, 00:09:37.827 "zoned": false, 00:09:37.827 "supported_io_types": { 00:09:37.827 "read": true, 00:09:37.827 "write": true, 00:09:37.827 "unmap": true, 00:09:37.827 "flush": true, 00:09:37.827 "reset": true, 00:09:37.827 "nvme_admin": false, 00:09:37.827 "nvme_io": false, 00:09:37.827 
"nvme_io_md": false, 00:09:37.827 "write_zeroes": true, 00:09:37.827 "zcopy": true, 00:09:37.827 "get_zone_info": false, 00:09:37.827 "zone_management": false, 00:09:37.827 "zone_append": false, 00:09:37.827 "compare": false, 00:09:37.827 "compare_and_write": false, 00:09:37.827 "abort": true, 00:09:37.827 "seek_hole": false, 00:09:37.827 "seek_data": false, 00:09:37.827 "copy": true, 00:09:37.827 "nvme_iov_md": false 00:09:37.827 }, 00:09:37.827 "memory_domains": [ 00:09:37.827 { 00:09:37.827 "dma_device_id": "system", 00:09:37.827 "dma_device_type": 1 00:09:37.827 }, 00:09:37.827 { 00:09:37.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.827 "dma_device_type": 2 00:09:37.827 } 00:09:37.827 ], 00:09:37.827 "driver_specific": {} 00:09:37.827 } 00:09:37.827 ] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.827 BaseBdev3 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.827 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.827 [ 00:09:37.827 { 00:09:37.827 "name": "BaseBdev3", 00:09:37.827 "aliases": [ 00:09:37.827 "9cbb9aaa-aece-4ce5-842a-aad3db61aca8" 00:09:37.827 ], 00:09:37.827 "product_name": "Malloc disk", 00:09:37.827 "block_size": 512, 00:09:37.827 "num_blocks": 65536, 00:09:37.827 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:37.827 "assigned_rate_limits": { 00:09:37.827 "rw_ios_per_sec": 0, 00:09:37.827 "rw_mbytes_per_sec": 0, 00:09:37.827 "r_mbytes_per_sec": 0, 00:09:37.827 "w_mbytes_per_sec": 0 00:09:37.827 }, 00:09:37.827 "claimed": false, 00:09:37.827 "zoned": false, 00:09:37.827 "supported_io_types": { 00:09:37.827 "read": true, 00:09:37.827 "write": true, 00:09:37.827 "unmap": true, 00:09:37.827 "flush": true, 00:09:37.827 "reset": true, 00:09:37.827 "nvme_admin": false, 
00:09:37.827 "nvme_io": false, 00:09:37.827 "nvme_io_md": false, 00:09:37.827 "write_zeroes": true, 00:09:37.827 "zcopy": true, 00:09:37.827 "get_zone_info": false, 00:09:37.827 "zone_management": false, 00:09:37.827 "zone_append": false, 00:09:37.827 "compare": false, 00:09:37.827 "compare_and_write": false, 00:09:37.827 "abort": true, 00:09:37.827 "seek_hole": false, 00:09:37.827 "seek_data": false, 00:09:37.827 "copy": true, 00:09:37.827 "nvme_iov_md": false 00:09:37.827 }, 00:09:37.827 "memory_domains": [ 00:09:37.827 { 00:09:37.827 "dma_device_id": "system", 00:09:37.828 "dma_device_type": 1 00:09:37.828 }, 00:09:37.828 { 00:09:37.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.828 "dma_device_type": 2 00:09:37.828 } 00:09:37.828 ], 00:09:37.828 "driver_specific": {} 00:09:37.828 } 00:09:37.828 ] 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.828 [2024-12-12 19:38:20.663339] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.828 [2024-12-12 19:38:20.663466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.828 [2024-12-12 19:38:20.663508] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.828 [2024-12-12 19:38:20.665448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.828 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.088 
19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.088 "name": "Existed_Raid", 00:09:38.088 "uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:38.088 "strip_size_kb": 0, 00:09:38.088 "state": "configuring", 00:09:38.088 "raid_level": "raid1", 00:09:38.088 "superblock": true, 00:09:38.088 "num_base_bdevs": 3, 00:09:38.088 "num_base_bdevs_discovered": 2, 00:09:38.088 "num_base_bdevs_operational": 3, 00:09:38.088 "base_bdevs_list": [ 00:09:38.088 { 00:09:38.088 "name": "BaseBdev1", 00:09:38.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.088 "is_configured": false, 00:09:38.088 "data_offset": 0, 00:09:38.088 "data_size": 0 00:09:38.088 }, 00:09:38.088 { 00:09:38.088 "name": "BaseBdev2", 00:09:38.088 "uuid": "64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:38.088 "is_configured": true, 00:09:38.088 "data_offset": 2048, 00:09:38.088 "data_size": 63488 00:09:38.088 }, 00:09:38.088 { 00:09:38.088 "name": "BaseBdev3", 00:09:38.088 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:38.088 "is_configured": true, 00:09:38.088 "data_offset": 2048, 00:09:38.088 "data_size": 63488 00:09:38.088 } 00:09:38.088 ] 00:09:38.088 }' 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.088 19:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.348 [2024-12-12 19:38:21.138517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.348 19:38:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.348 "name": 
"Existed_Raid", 00:09:38.348 "uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:38.348 "strip_size_kb": 0, 00:09:38.348 "state": "configuring", 00:09:38.348 "raid_level": "raid1", 00:09:38.348 "superblock": true, 00:09:38.348 "num_base_bdevs": 3, 00:09:38.348 "num_base_bdevs_discovered": 1, 00:09:38.348 "num_base_bdevs_operational": 3, 00:09:38.348 "base_bdevs_list": [ 00:09:38.348 { 00:09:38.348 "name": "BaseBdev1", 00:09:38.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.348 "is_configured": false, 00:09:38.348 "data_offset": 0, 00:09:38.348 "data_size": 0 00:09:38.348 }, 00:09:38.348 { 00:09:38.348 "name": null, 00:09:38.348 "uuid": "64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:38.348 "is_configured": false, 00:09:38.348 "data_offset": 0, 00:09:38.348 "data_size": 63488 00:09:38.348 }, 00:09:38.348 { 00:09:38.348 "name": "BaseBdev3", 00:09:38.348 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:38.348 "is_configured": true, 00:09:38.348 "data_offset": 2048, 00:09:38.348 "data_size": 63488 00:09:38.348 } 00:09:38.348 ] 00:09:38.348 }' 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.348 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:38.917 
19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.917 [2024-12-12 19:38:21.614486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.917 BaseBdev1 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:38.917 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.917 [ 00:09:38.917 { 00:09:38.917 "name": "BaseBdev1", 00:09:38.917 "aliases": [ 00:09:38.917 "2380cff0-9582-46a8-bfde-2553407b35b2" 00:09:38.917 ], 00:09:38.917 "product_name": "Malloc disk", 00:09:38.917 "block_size": 512, 00:09:38.917 "num_blocks": 65536, 00:09:38.917 "uuid": "2380cff0-9582-46a8-bfde-2553407b35b2", 00:09:38.917 "assigned_rate_limits": { 00:09:38.917 "rw_ios_per_sec": 0, 00:09:38.917 "rw_mbytes_per_sec": 0, 00:09:38.917 "r_mbytes_per_sec": 0, 00:09:38.917 "w_mbytes_per_sec": 0 00:09:38.917 }, 00:09:38.917 "claimed": true, 00:09:38.917 "claim_type": "exclusive_write", 00:09:38.917 "zoned": false, 00:09:38.917 "supported_io_types": { 00:09:38.917 "read": true, 00:09:38.917 "write": true, 00:09:38.917 "unmap": true, 00:09:38.917 "flush": true, 00:09:38.917 "reset": true, 00:09:38.917 "nvme_admin": false, 00:09:38.917 "nvme_io": false, 00:09:38.917 "nvme_io_md": false, 00:09:38.917 "write_zeroes": true, 00:09:38.917 "zcopy": true, 00:09:38.917 "get_zone_info": false, 00:09:38.918 "zone_management": false, 00:09:38.918 "zone_append": false, 00:09:38.918 "compare": false, 00:09:38.918 "compare_and_write": false, 00:09:38.918 "abort": true, 00:09:38.918 "seek_hole": false, 00:09:38.918 "seek_data": false, 00:09:38.918 "copy": true, 00:09:38.918 "nvme_iov_md": false 00:09:38.918 }, 00:09:38.918 "memory_domains": [ 00:09:38.918 { 00:09:38.918 "dma_device_id": "system", 00:09:38.918 "dma_device_type": 1 00:09:38.918 }, 00:09:38.918 { 00:09:38.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.918 "dma_device_type": 2 00:09:38.918 } 00:09:38.918 ], 00:09:38.918 "driver_specific": {} 00:09:38.918 } 00:09:38.918 ] 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.918 
19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.918 "name": "Existed_Raid", 00:09:38.918 "uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:38.918 "strip_size_kb": 0, 
00:09:38.918 "state": "configuring", 00:09:38.918 "raid_level": "raid1", 00:09:38.918 "superblock": true, 00:09:38.918 "num_base_bdevs": 3, 00:09:38.918 "num_base_bdevs_discovered": 2, 00:09:38.918 "num_base_bdevs_operational": 3, 00:09:38.918 "base_bdevs_list": [ 00:09:38.918 { 00:09:38.918 "name": "BaseBdev1", 00:09:38.918 "uuid": "2380cff0-9582-46a8-bfde-2553407b35b2", 00:09:38.918 "is_configured": true, 00:09:38.918 "data_offset": 2048, 00:09:38.918 "data_size": 63488 00:09:38.918 }, 00:09:38.918 { 00:09:38.918 "name": null, 00:09:38.918 "uuid": "64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:38.918 "is_configured": false, 00:09:38.918 "data_offset": 0, 00:09:38.918 "data_size": 63488 00:09:38.918 }, 00:09:38.918 { 00:09:38.918 "name": "BaseBdev3", 00:09:38.918 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:38.918 "is_configured": true, 00:09:38.918 "data_offset": 2048, 00:09:38.918 "data_size": 63488 00:09:38.918 } 00:09:38.918 ] 00:09:38.918 }' 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.918 19:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.487 [2024-12-12 19:38:22.157624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.487 "name": "Existed_Raid", 00:09:39.487 "uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:39.487 "strip_size_kb": 0, 00:09:39.487 "state": "configuring", 00:09:39.487 "raid_level": "raid1", 00:09:39.487 "superblock": true, 00:09:39.487 "num_base_bdevs": 3, 00:09:39.487 "num_base_bdevs_discovered": 1, 00:09:39.487 "num_base_bdevs_operational": 3, 00:09:39.487 "base_bdevs_list": [ 00:09:39.487 { 00:09:39.487 "name": "BaseBdev1", 00:09:39.487 "uuid": "2380cff0-9582-46a8-bfde-2553407b35b2", 00:09:39.487 "is_configured": true, 00:09:39.487 "data_offset": 2048, 00:09:39.487 "data_size": 63488 00:09:39.487 }, 00:09:39.487 { 00:09:39.487 "name": null, 00:09:39.487 "uuid": "64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:39.487 "is_configured": false, 00:09:39.487 "data_offset": 0, 00:09:39.487 "data_size": 63488 00:09:39.487 }, 00:09:39.487 { 00:09:39.487 "name": null, 00:09:39.487 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:39.487 "is_configured": false, 00:09:39.487 "data_offset": 0, 00:09:39.487 "data_size": 63488 00:09:39.487 } 00:09:39.487 ] 00:09:39.487 }' 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.487 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.747 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.747 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.747 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:39.747 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.747 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.747 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:39.747 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:39.747 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.747 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.747 [2024-12-12 19:38:22.588890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.007 "name": "Existed_Raid", 00:09:40.007 "uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:40.007 "strip_size_kb": 0, 00:09:40.007 "state": "configuring", 00:09:40.007 "raid_level": "raid1", 00:09:40.007 "superblock": true, 00:09:40.007 "num_base_bdevs": 3, 00:09:40.007 "num_base_bdevs_discovered": 2, 00:09:40.007 "num_base_bdevs_operational": 3, 00:09:40.007 "base_bdevs_list": [ 00:09:40.007 { 00:09:40.007 "name": "BaseBdev1", 00:09:40.007 "uuid": "2380cff0-9582-46a8-bfde-2553407b35b2", 00:09:40.007 "is_configured": true, 00:09:40.007 "data_offset": 2048, 00:09:40.007 "data_size": 63488 00:09:40.007 }, 00:09:40.007 { 00:09:40.007 "name": null, 00:09:40.007 "uuid": "64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:40.007 "is_configured": false, 00:09:40.007 "data_offset": 0, 00:09:40.007 "data_size": 63488 00:09:40.007 }, 00:09:40.007 { 00:09:40.007 "name": "BaseBdev3", 00:09:40.007 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:40.007 "is_configured": true, 00:09:40.007 "data_offset": 2048, 00:09:40.007 "data_size": 63488 00:09:40.007 } 00:09:40.007 ] 00:09:40.007 }' 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.007 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.267 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.267 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.267 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.267 19:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:40.267 19:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.267 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:40.267 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.267 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.267 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.267 [2024-12-12 19:38:23.024212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.526 "name": "Existed_Raid", 00:09:40.526 "uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:40.526 "strip_size_kb": 0, 00:09:40.526 "state": "configuring", 00:09:40.526 "raid_level": "raid1", 00:09:40.526 "superblock": true, 00:09:40.526 "num_base_bdevs": 3, 00:09:40.526 "num_base_bdevs_discovered": 1, 00:09:40.526 "num_base_bdevs_operational": 3, 00:09:40.526 "base_bdevs_list": [ 00:09:40.526 { 00:09:40.526 "name": null, 00:09:40.526 "uuid": "2380cff0-9582-46a8-bfde-2553407b35b2", 00:09:40.526 "is_configured": false, 00:09:40.526 "data_offset": 0, 00:09:40.526 "data_size": 63488 00:09:40.526 }, 00:09:40.526 { 00:09:40.526 "name": null, 00:09:40.526 "uuid": 
"64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:40.526 "is_configured": false, 00:09:40.526 "data_offset": 0, 00:09:40.526 "data_size": 63488 00:09:40.526 }, 00:09:40.526 { 00:09:40.526 "name": "BaseBdev3", 00:09:40.526 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:40.526 "is_configured": true, 00:09:40.526 "data_offset": 2048, 00:09:40.526 "data_size": 63488 00:09:40.526 } 00:09:40.526 ] 00:09:40.526 }' 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.526 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.786 [2024-12-12 19:38:23.567568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.786 "name": "Existed_Raid", 00:09:40.786 "uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:40.786 "strip_size_kb": 0, 00:09:40.786 "state": "configuring", 00:09:40.786 
"raid_level": "raid1", 00:09:40.786 "superblock": true, 00:09:40.786 "num_base_bdevs": 3, 00:09:40.786 "num_base_bdevs_discovered": 2, 00:09:40.786 "num_base_bdevs_operational": 3, 00:09:40.786 "base_bdevs_list": [ 00:09:40.786 { 00:09:40.786 "name": null, 00:09:40.786 "uuid": "2380cff0-9582-46a8-bfde-2553407b35b2", 00:09:40.786 "is_configured": false, 00:09:40.786 "data_offset": 0, 00:09:40.786 "data_size": 63488 00:09:40.786 }, 00:09:40.786 { 00:09:40.786 "name": "BaseBdev2", 00:09:40.786 "uuid": "64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:40.786 "is_configured": true, 00:09:40.786 "data_offset": 2048, 00:09:40.786 "data_size": 63488 00:09:40.786 }, 00:09:40.786 { 00:09:40.786 "name": "BaseBdev3", 00:09:40.786 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:40.786 "is_configured": true, 00:09:40.786 "data_offset": 2048, 00:09:40.786 "data_size": 63488 00:09:40.786 } 00:09:40.786 ] 00:09:40.786 }' 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.786 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.356 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.356 19:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.356 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.356 19:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:41.356 19:38:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2380cff0-9582-46a8-bfde-2553407b35b2 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.356 [2024-12-12 19:38:24.108270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:41.356 [2024-12-12 19:38:24.108512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:41.356 [2024-12-12 19:38:24.108527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.356 [2024-12-12 19:38:24.108869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:41.356 NewBaseBdev 00:09:41.356 [2024-12-12 19:38:24.109075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:41.356 [2024-12-12 19:38:24.109098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:41.356 [2024-12-12 19:38:24.109243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:41.356 
19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.356 [ 00:09:41.356 { 00:09:41.356 "name": "NewBaseBdev", 00:09:41.356 "aliases": [ 00:09:41.356 "2380cff0-9582-46a8-bfde-2553407b35b2" 00:09:41.356 ], 00:09:41.356 "product_name": "Malloc disk", 00:09:41.356 "block_size": 512, 00:09:41.356 "num_blocks": 65536, 00:09:41.356 "uuid": "2380cff0-9582-46a8-bfde-2553407b35b2", 00:09:41.356 "assigned_rate_limits": { 00:09:41.356 "rw_ios_per_sec": 0, 00:09:41.356 "rw_mbytes_per_sec": 0, 00:09:41.356 "r_mbytes_per_sec": 0, 00:09:41.356 "w_mbytes_per_sec": 0 00:09:41.356 }, 00:09:41.356 "claimed": true, 00:09:41.356 "claim_type": "exclusive_write", 00:09:41.356 
"zoned": false, 00:09:41.356 "supported_io_types": { 00:09:41.356 "read": true, 00:09:41.356 "write": true, 00:09:41.356 "unmap": true, 00:09:41.356 "flush": true, 00:09:41.356 "reset": true, 00:09:41.356 "nvme_admin": false, 00:09:41.356 "nvme_io": false, 00:09:41.356 "nvme_io_md": false, 00:09:41.356 "write_zeroes": true, 00:09:41.356 "zcopy": true, 00:09:41.356 "get_zone_info": false, 00:09:41.356 "zone_management": false, 00:09:41.356 "zone_append": false, 00:09:41.356 "compare": false, 00:09:41.356 "compare_and_write": false, 00:09:41.356 "abort": true, 00:09:41.356 "seek_hole": false, 00:09:41.356 "seek_data": false, 00:09:41.356 "copy": true, 00:09:41.356 "nvme_iov_md": false 00:09:41.356 }, 00:09:41.356 "memory_domains": [ 00:09:41.356 { 00:09:41.356 "dma_device_id": "system", 00:09:41.356 "dma_device_type": 1 00:09:41.356 }, 00:09:41.356 { 00:09:41.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.356 "dma_device_type": 2 00:09:41.356 } 00:09:41.356 ], 00:09:41.356 "driver_specific": {} 00:09:41.356 } 00:09:41.356 ] 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.356 "name": "Existed_Raid", 00:09:41.356 "uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:41.356 "strip_size_kb": 0, 00:09:41.356 "state": "online", 00:09:41.356 "raid_level": "raid1", 00:09:41.356 "superblock": true, 00:09:41.356 "num_base_bdevs": 3, 00:09:41.356 "num_base_bdevs_discovered": 3, 00:09:41.356 "num_base_bdevs_operational": 3, 00:09:41.356 "base_bdevs_list": [ 00:09:41.356 { 00:09:41.356 "name": "NewBaseBdev", 00:09:41.356 "uuid": "2380cff0-9582-46a8-bfde-2553407b35b2", 00:09:41.356 "is_configured": true, 00:09:41.356 "data_offset": 2048, 00:09:41.356 "data_size": 63488 00:09:41.356 }, 00:09:41.356 { 00:09:41.356 "name": "BaseBdev2", 00:09:41.356 "uuid": "64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:41.356 "is_configured": true, 00:09:41.356 "data_offset": 2048, 00:09:41.356 "data_size": 63488 00:09:41.356 }, 00:09:41.356 
{ 00:09:41.356 "name": "BaseBdev3", 00:09:41.356 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:41.356 "is_configured": true, 00:09:41.356 "data_offset": 2048, 00:09:41.356 "data_size": 63488 00:09:41.356 } 00:09:41.356 ] 00:09:41.356 }' 00:09:41.356 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.616 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.876 [2024-12-12 19:38:24.595758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.876 "name": "Existed_Raid", 00:09:41.876 
"aliases": [ 00:09:41.876 "700d436b-184c-4c16-9127-70ca3cd8d22d" 00:09:41.876 ], 00:09:41.876 "product_name": "Raid Volume", 00:09:41.876 "block_size": 512, 00:09:41.876 "num_blocks": 63488, 00:09:41.876 "uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:41.876 "assigned_rate_limits": { 00:09:41.876 "rw_ios_per_sec": 0, 00:09:41.876 "rw_mbytes_per_sec": 0, 00:09:41.876 "r_mbytes_per_sec": 0, 00:09:41.876 "w_mbytes_per_sec": 0 00:09:41.876 }, 00:09:41.876 "claimed": false, 00:09:41.876 "zoned": false, 00:09:41.876 "supported_io_types": { 00:09:41.876 "read": true, 00:09:41.876 "write": true, 00:09:41.876 "unmap": false, 00:09:41.876 "flush": false, 00:09:41.876 "reset": true, 00:09:41.876 "nvme_admin": false, 00:09:41.876 "nvme_io": false, 00:09:41.876 "nvme_io_md": false, 00:09:41.876 "write_zeroes": true, 00:09:41.876 "zcopy": false, 00:09:41.876 "get_zone_info": false, 00:09:41.876 "zone_management": false, 00:09:41.876 "zone_append": false, 00:09:41.876 "compare": false, 00:09:41.876 "compare_and_write": false, 00:09:41.876 "abort": false, 00:09:41.876 "seek_hole": false, 00:09:41.876 "seek_data": false, 00:09:41.876 "copy": false, 00:09:41.876 "nvme_iov_md": false 00:09:41.876 }, 00:09:41.876 "memory_domains": [ 00:09:41.876 { 00:09:41.876 "dma_device_id": "system", 00:09:41.876 "dma_device_type": 1 00:09:41.876 }, 00:09:41.876 { 00:09:41.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.876 "dma_device_type": 2 00:09:41.876 }, 00:09:41.876 { 00:09:41.876 "dma_device_id": "system", 00:09:41.876 "dma_device_type": 1 00:09:41.876 }, 00:09:41.876 { 00:09:41.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.876 "dma_device_type": 2 00:09:41.876 }, 00:09:41.876 { 00:09:41.876 "dma_device_id": "system", 00:09:41.876 "dma_device_type": 1 00:09:41.876 }, 00:09:41.876 { 00:09:41.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.876 "dma_device_type": 2 00:09:41.876 } 00:09:41.876 ], 00:09:41.876 "driver_specific": { 00:09:41.876 "raid": { 00:09:41.876 
"uuid": "700d436b-184c-4c16-9127-70ca3cd8d22d", 00:09:41.876 "strip_size_kb": 0, 00:09:41.876 "state": "online", 00:09:41.876 "raid_level": "raid1", 00:09:41.876 "superblock": true, 00:09:41.876 "num_base_bdevs": 3, 00:09:41.876 "num_base_bdevs_discovered": 3, 00:09:41.876 "num_base_bdevs_operational": 3, 00:09:41.876 "base_bdevs_list": [ 00:09:41.876 { 00:09:41.876 "name": "NewBaseBdev", 00:09:41.876 "uuid": "2380cff0-9582-46a8-bfde-2553407b35b2", 00:09:41.876 "is_configured": true, 00:09:41.876 "data_offset": 2048, 00:09:41.876 "data_size": 63488 00:09:41.876 }, 00:09:41.876 { 00:09:41.876 "name": "BaseBdev2", 00:09:41.876 "uuid": "64cf8717-0971-4ec3-a4a3-ef353d8b23e9", 00:09:41.876 "is_configured": true, 00:09:41.876 "data_offset": 2048, 00:09:41.876 "data_size": 63488 00:09:41.876 }, 00:09:41.876 { 00:09:41.876 "name": "BaseBdev3", 00:09:41.876 "uuid": "9cbb9aaa-aece-4ce5-842a-aad3db61aca8", 00:09:41.876 "is_configured": true, 00:09:41.876 "data_offset": 2048, 00:09:41.876 "data_size": 63488 00:09:41.876 } 00:09:41.876 ] 00:09:41.876 } 00:09:41.876 } 00:09:41.876 }' 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:41.876 BaseBdev2 00:09:41.876 BaseBdev3' 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:41.876 19:38:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.876 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.136 19:38:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.136 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.136 [2024-12-12 19:38:24.875009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.136 [2024-12-12 19:38:24.875041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.136 [2024-12-12 19:38:24.875107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.137 [2024-12-12 19:38:24.875407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.137 [2024-12-12 19:38:24.875416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69718 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 69718 ']' 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69718 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69718 00:09:42.137 killing process with pid 69718 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69718' 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69718 00:09:42.137 [2024-12-12 19:38:24.922387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.137 19:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69718 00:09:42.705 [2024-12-12 19:38:25.251123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.084 19:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:44.084 00:09:44.084 real 0m10.591s 00:09:44.084 user 0m16.719s 00:09:44.084 sys 0m1.835s 00:09:44.084 19:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.084 19:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.084 ************************************ 00:09:44.084 END TEST raid_state_function_test_sb 00:09:44.084 ************************************ 00:09:44.084 19:38:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:44.084 19:38:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:44.084 19:38:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.084 19:38:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.084 ************************************ 00:09:44.084 START TEST raid_superblock_test 00:09:44.084 ************************************ 00:09:44.084 19:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70338 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70338 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70338 ']' 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.085 19:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.085 [2024-12-12 19:38:26.669937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:44.085 [2024-12-12 19:38:26.670074] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70338 ] 00:09:44.085 [2024-12-12 19:38:26.850809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.344 [2024-12-12 19:38:26.994196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.604 [2024-12-12 19:38:27.231648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.604 [2024-12-12 19:38:27.231733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:44.863 
19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.863 malloc1 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.863 [2024-12-12 19:38:27.573017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.863 [2024-12-12 19:38:27.573146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.863 [2024-12-12 19:38:27.573176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:44.863 [2024-12-12 19:38:27.573186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.863 [2024-12-12 19:38:27.575624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.863 [2024-12-12 19:38:27.575662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.863 pt1 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.863 malloc2 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.863 [2024-12-12 19:38:27.633451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.863 [2024-12-12 19:38:27.633619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.863 [2024-12-12 19:38:27.633666] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:44.863 [2024-12-12 19:38:27.633706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.863 [2024-12-12 19:38:27.636128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.863 [2024-12-12 19:38:27.636202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.863 
pt2 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:44.863 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.864 malloc3 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.864 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.123 [2024-12-12 19:38:27.707334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:45.123 [2024-12-12 19:38:27.707442] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.123 [2024-12-12 19:38:27.707481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:45.123 [2024-12-12 19:38:27.707514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.123 [2024-12-12 19:38:27.709899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.123 [2024-12-12 19:38:27.709972] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:45.123 pt3 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.123 [2024-12-12 19:38:27.719356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.123 [2024-12-12 19:38:27.721429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.123 [2024-12-12 19:38:27.721497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:45.123 [2024-12-12 19:38:27.721677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:45.123 [2024-12-12 19:38:27.721696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.123 [2024-12-12 19:38:27.721931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:45.123 
[2024-12-12 19:38:27.722142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:45.123 [2024-12-12 19:38:27.722156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:45.123 [2024-12-12 19:38:27.722307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.123 "name": "raid_bdev1", 00:09:45.123 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:45.123 "strip_size_kb": 0, 00:09:45.123 "state": "online", 00:09:45.123 "raid_level": "raid1", 00:09:45.123 "superblock": true, 00:09:45.123 "num_base_bdevs": 3, 00:09:45.123 "num_base_bdevs_discovered": 3, 00:09:45.123 "num_base_bdevs_operational": 3, 00:09:45.123 "base_bdevs_list": [ 00:09:45.123 { 00:09:45.123 "name": "pt1", 00:09:45.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.123 "is_configured": true, 00:09:45.123 "data_offset": 2048, 00:09:45.123 "data_size": 63488 00:09:45.123 }, 00:09:45.123 { 00:09:45.123 "name": "pt2", 00:09:45.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.123 "is_configured": true, 00:09:45.123 "data_offset": 2048, 00:09:45.123 "data_size": 63488 00:09:45.123 }, 00:09:45.123 { 00:09:45.123 "name": "pt3", 00:09:45.123 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.123 "is_configured": true, 00:09:45.123 "data_offset": 2048, 00:09:45.123 "data_size": 63488 00:09:45.123 } 00:09:45.123 ] 00:09:45.123 }' 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.123 19:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.383 19:38:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.383 [2024-12-12 19:38:28.162979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.383 "name": "raid_bdev1", 00:09:45.383 "aliases": [ 00:09:45.383 "84cd1df6-0df9-4df9-9745-08fcface453f" 00:09:45.383 ], 00:09:45.383 "product_name": "Raid Volume", 00:09:45.383 "block_size": 512, 00:09:45.383 "num_blocks": 63488, 00:09:45.383 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:45.383 "assigned_rate_limits": { 00:09:45.383 "rw_ios_per_sec": 0, 00:09:45.383 "rw_mbytes_per_sec": 0, 00:09:45.383 "r_mbytes_per_sec": 0, 00:09:45.383 "w_mbytes_per_sec": 0 00:09:45.383 }, 00:09:45.383 "claimed": false, 00:09:45.383 "zoned": false, 00:09:45.383 "supported_io_types": { 00:09:45.383 "read": true, 00:09:45.383 "write": true, 00:09:45.383 "unmap": false, 00:09:45.383 "flush": false, 00:09:45.383 "reset": true, 00:09:45.383 "nvme_admin": false, 00:09:45.383 "nvme_io": false, 00:09:45.383 "nvme_io_md": false, 00:09:45.383 "write_zeroes": true, 00:09:45.383 "zcopy": false, 00:09:45.383 "get_zone_info": false, 00:09:45.383 "zone_management": false, 00:09:45.383 "zone_append": false, 00:09:45.383 "compare": false, 00:09:45.383 
"compare_and_write": false, 00:09:45.383 "abort": false, 00:09:45.383 "seek_hole": false, 00:09:45.383 "seek_data": false, 00:09:45.383 "copy": false, 00:09:45.383 "nvme_iov_md": false 00:09:45.383 }, 00:09:45.383 "memory_domains": [ 00:09:45.383 { 00:09:45.383 "dma_device_id": "system", 00:09:45.383 "dma_device_type": 1 00:09:45.383 }, 00:09:45.383 { 00:09:45.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.383 "dma_device_type": 2 00:09:45.383 }, 00:09:45.383 { 00:09:45.383 "dma_device_id": "system", 00:09:45.383 "dma_device_type": 1 00:09:45.383 }, 00:09:45.383 { 00:09:45.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.383 "dma_device_type": 2 00:09:45.383 }, 00:09:45.383 { 00:09:45.383 "dma_device_id": "system", 00:09:45.383 "dma_device_type": 1 00:09:45.383 }, 00:09:45.383 { 00:09:45.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.383 "dma_device_type": 2 00:09:45.383 } 00:09:45.383 ], 00:09:45.383 "driver_specific": { 00:09:45.383 "raid": { 00:09:45.383 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:45.383 "strip_size_kb": 0, 00:09:45.383 "state": "online", 00:09:45.383 "raid_level": "raid1", 00:09:45.383 "superblock": true, 00:09:45.383 "num_base_bdevs": 3, 00:09:45.383 "num_base_bdevs_discovered": 3, 00:09:45.383 "num_base_bdevs_operational": 3, 00:09:45.383 "base_bdevs_list": [ 00:09:45.383 { 00:09:45.383 "name": "pt1", 00:09:45.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.383 "is_configured": true, 00:09:45.383 "data_offset": 2048, 00:09:45.383 "data_size": 63488 00:09:45.383 }, 00:09:45.383 { 00:09:45.383 "name": "pt2", 00:09:45.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.383 "is_configured": true, 00:09:45.383 "data_offset": 2048, 00:09:45.383 "data_size": 63488 00:09:45.383 }, 00:09:45.383 { 00:09:45.383 "name": "pt3", 00:09:45.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.383 "is_configured": true, 00:09:45.383 "data_offset": 2048, 00:09:45.383 "data_size": 63488 00:09:45.383 } 
00:09:45.383 ] 00:09:45.383 } 00:09:45.383 } 00:09:45.383 }' 00:09:45.383 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.643 pt2 00:09:45.643 pt3' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.643 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.643 [2024-12-12 19:38:28.466305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=84cd1df6-0df9-4df9-9745-08fcface453f 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 84cd1df6-0df9-4df9-9745-08fcface453f ']' 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 [2024-12-12 19:38:28.513974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.903 [2024-12-12 19:38:28.514020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.903 [2024-12-12 19:38:28.514144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.903 [2024-12-12 19:38:28.514255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.903 [2024-12-12 19:38:28.514270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.903 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:45.904 19:38:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 [2024-12-12 19:38:28.661849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:45.904 [2024-12-12 19:38:28.664178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:45.904 [2024-12-12 19:38:28.664303] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:45.904 [2024-12-12 19:38:28.664374] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:45.904 [2024-12-12 19:38:28.664437] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:45.904 [2024-12-12 19:38:28.664455] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:45.904 [2024-12-12 19:38:28.664472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.904 [2024-12-12 19:38:28.664482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:45.904 request: 00:09:45.904 { 00:09:45.904 "name": "raid_bdev1", 00:09:45.904 "raid_level": "raid1", 00:09:45.904 "base_bdevs": [ 00:09:45.904 "malloc1", 00:09:45.904 "malloc2", 00:09:45.904 "malloc3" 00:09:45.904 ], 00:09:45.904 "superblock": false, 00:09:45.904 "method": "bdev_raid_create", 00:09:45.904 "req_id": 1 00:09:45.904 } 00:09:45.904 Got JSON-RPC error response 00:09:45.904 response: 00:09:45.904 { 00:09:45.904 "code": -17, 00:09:45.904 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:45.904 } 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 [2024-12-12 19:38:28.725678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.904 [2024-12-12 19:38:28.725830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.904 [2024-12-12 19:38:28.725870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:45.904 [2024-12-12 19:38:28.725909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.904 [2024-12-12 19:38:28.728481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.904 [2024-12-12 19:38:28.728565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.904 [2024-12-12 19:38:28.728702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:45.904 [2024-12-12 19:38:28.728801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.904 pt1 00:09:45.904 
19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.904 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.164 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.164 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.164 "name": "raid_bdev1", 00:09:46.164 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:46.164 "strip_size_kb": 0, 00:09:46.164 
"state": "configuring", 00:09:46.164 "raid_level": "raid1", 00:09:46.164 "superblock": true, 00:09:46.164 "num_base_bdevs": 3, 00:09:46.164 "num_base_bdevs_discovered": 1, 00:09:46.164 "num_base_bdevs_operational": 3, 00:09:46.164 "base_bdevs_list": [ 00:09:46.164 { 00:09:46.164 "name": "pt1", 00:09:46.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.164 "is_configured": true, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 }, 00:09:46.164 { 00:09:46.164 "name": null, 00:09:46.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.164 "is_configured": false, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 }, 00:09:46.164 { 00:09:46.164 "name": null, 00:09:46.164 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.164 "is_configured": false, 00:09:46.164 "data_offset": 2048, 00:09:46.164 "data_size": 63488 00:09:46.164 } 00:09:46.164 ] 00:09:46.164 }' 00:09:46.164 19:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.164 19:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.424 [2024-12-12 19:38:29.160916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.424 [2024-12-12 19:38:29.161022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.424 [2024-12-12 19:38:29.161049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:46.424 
[2024-12-12 19:38:29.161059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.424 [2024-12-12 19:38:29.161669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.424 [2024-12-12 19:38:29.161692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.424 [2024-12-12 19:38:29.161801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.424 [2024-12-12 19:38:29.161827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.424 pt2 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.424 [2024-12-12 19:38:29.172844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.424 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.425 "name": "raid_bdev1", 00:09:46.425 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:46.425 "strip_size_kb": 0, 00:09:46.425 "state": "configuring", 00:09:46.425 "raid_level": "raid1", 00:09:46.425 "superblock": true, 00:09:46.425 "num_base_bdevs": 3, 00:09:46.425 "num_base_bdevs_discovered": 1, 00:09:46.425 "num_base_bdevs_operational": 3, 00:09:46.425 "base_bdevs_list": [ 00:09:46.425 { 00:09:46.425 "name": "pt1", 00:09:46.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.425 "is_configured": true, 00:09:46.425 "data_offset": 2048, 00:09:46.425 "data_size": 63488 00:09:46.425 }, 00:09:46.425 { 00:09:46.425 "name": null, 00:09:46.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.425 "is_configured": false, 00:09:46.425 "data_offset": 0, 00:09:46.425 "data_size": 63488 00:09:46.425 }, 00:09:46.425 { 00:09:46.425 "name": null, 00:09:46.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.425 "is_configured": false, 00:09:46.425 
"data_offset": 2048, 00:09:46.425 "data_size": 63488 00:09:46.425 } 00:09:46.425 ] 00:09:46.425 }' 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.425 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 [2024-12-12 19:38:29.656042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.001 [2024-12-12 19:38:29.656204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.001 [2024-12-12 19:38:29.656246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:47.001 [2024-12-12 19:38:29.656280] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.001 [2024-12-12 19:38:29.656872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.001 [2024-12-12 19:38:29.656942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.001 [2024-12-12 19:38:29.657078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:47.001 [2024-12-12 19:38:29.657157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.001 pt2 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.001 19:38:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 [2024-12-12 19:38:29.668003] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.001 [2024-12-12 19:38:29.668094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.001 [2024-12-12 19:38:29.668127] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:47.001 [2024-12-12 19:38:29.668156] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.001 [2024-12-12 19:38:29.668597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.001 [2024-12-12 19:38:29.668662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.001 [2024-12-12 19:38:29.668764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:47.001 [2024-12-12 19:38:29.668816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.001 [2024-12-12 19:38:29.668989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:47.001 [2024-12-12 19:38:29.669034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.001 [2024-12-12 19:38:29.669330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:47.001 [2024-12-12 19:38:29.669537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:47.001 [2024-12-12 19:38:29.669594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:47.001 [2024-12-12 19:38:29.669778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.001 pt3 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.001 19:38:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.001 "name": "raid_bdev1", 00:09:47.001 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:47.001 "strip_size_kb": 0, 00:09:47.001 "state": "online", 00:09:47.001 "raid_level": "raid1", 00:09:47.001 "superblock": true, 00:09:47.001 "num_base_bdevs": 3, 00:09:47.001 "num_base_bdevs_discovered": 3, 00:09:47.001 "num_base_bdevs_operational": 3, 00:09:47.001 "base_bdevs_list": [ 00:09:47.001 { 00:09:47.001 "name": "pt1", 00:09:47.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.001 "is_configured": true, 00:09:47.001 "data_offset": 2048, 00:09:47.001 "data_size": 63488 00:09:47.001 }, 00:09:47.001 { 00:09:47.001 "name": "pt2", 00:09:47.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.001 "is_configured": true, 00:09:47.001 "data_offset": 2048, 00:09:47.001 "data_size": 63488 00:09:47.001 }, 00:09:47.001 { 00:09:47.001 "name": "pt3", 00:09:47.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.001 "is_configured": true, 00:09:47.001 "data_offset": 2048, 00:09:47.001 "data_size": 63488 00:09:47.001 } 00:09:47.001 ] 00:09:47.001 }' 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.001 19:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.273 [2024-12-12 19:38:30.079618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.273 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.533 "name": "raid_bdev1", 00:09:47.533 "aliases": [ 00:09:47.533 "84cd1df6-0df9-4df9-9745-08fcface453f" 00:09:47.533 ], 00:09:47.533 "product_name": "Raid Volume", 00:09:47.533 "block_size": 512, 00:09:47.533 "num_blocks": 63488, 00:09:47.533 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:47.533 "assigned_rate_limits": { 00:09:47.533 "rw_ios_per_sec": 0, 00:09:47.533 "rw_mbytes_per_sec": 0, 00:09:47.533 "r_mbytes_per_sec": 0, 00:09:47.533 "w_mbytes_per_sec": 0 00:09:47.533 }, 00:09:47.533 "claimed": false, 00:09:47.533 "zoned": false, 00:09:47.533 "supported_io_types": { 00:09:47.533 "read": true, 00:09:47.533 "write": true, 00:09:47.533 "unmap": false, 00:09:47.533 "flush": false, 00:09:47.533 "reset": true, 00:09:47.533 "nvme_admin": false, 00:09:47.533 "nvme_io": false, 00:09:47.533 "nvme_io_md": false, 00:09:47.533 "write_zeroes": true, 00:09:47.533 "zcopy": false, 00:09:47.533 "get_zone_info": 
false, 00:09:47.533 "zone_management": false, 00:09:47.533 "zone_append": false, 00:09:47.533 "compare": false, 00:09:47.533 "compare_and_write": false, 00:09:47.533 "abort": false, 00:09:47.533 "seek_hole": false, 00:09:47.533 "seek_data": false, 00:09:47.533 "copy": false, 00:09:47.533 "nvme_iov_md": false 00:09:47.533 }, 00:09:47.533 "memory_domains": [ 00:09:47.533 { 00:09:47.533 "dma_device_id": "system", 00:09:47.533 "dma_device_type": 1 00:09:47.533 }, 00:09:47.533 { 00:09:47.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.533 "dma_device_type": 2 00:09:47.533 }, 00:09:47.533 { 00:09:47.533 "dma_device_id": "system", 00:09:47.533 "dma_device_type": 1 00:09:47.533 }, 00:09:47.533 { 00:09:47.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.533 "dma_device_type": 2 00:09:47.533 }, 00:09:47.533 { 00:09:47.533 "dma_device_id": "system", 00:09:47.533 "dma_device_type": 1 00:09:47.533 }, 00:09:47.533 { 00:09:47.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.533 "dma_device_type": 2 00:09:47.533 } 00:09:47.533 ], 00:09:47.533 "driver_specific": { 00:09:47.533 "raid": { 00:09:47.533 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:47.533 "strip_size_kb": 0, 00:09:47.533 "state": "online", 00:09:47.533 "raid_level": "raid1", 00:09:47.533 "superblock": true, 00:09:47.533 "num_base_bdevs": 3, 00:09:47.533 "num_base_bdevs_discovered": 3, 00:09:47.533 "num_base_bdevs_operational": 3, 00:09:47.533 "base_bdevs_list": [ 00:09:47.533 { 00:09:47.533 "name": "pt1", 00:09:47.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.533 "is_configured": true, 00:09:47.533 "data_offset": 2048, 00:09:47.533 "data_size": 63488 00:09:47.533 }, 00:09:47.533 { 00:09:47.533 "name": "pt2", 00:09:47.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.533 "is_configured": true, 00:09:47.533 "data_offset": 2048, 00:09:47.533 "data_size": 63488 00:09:47.533 }, 00:09:47.533 { 00:09:47.533 "name": "pt3", 00:09:47.533 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:47.533 "is_configured": true, 00:09:47.533 "data_offset": 2048, 00:09:47.533 "data_size": 63488 00:09:47.533 } 00:09:47.533 ] 00:09:47.533 } 00:09:47.533 } 00:09:47.533 }' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.533 pt2 00:09:47.533 pt3' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.533 [2024-12-12 19:38:30.335160] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 84cd1df6-0df9-4df9-9745-08fcface453f '!=' 84cd1df6-0df9-4df9-9745-08fcface453f ']' 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:47.533 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.793 [2024-12-12 19:38:30.382875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.793 19:38:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.793 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.794 "name": "raid_bdev1", 00:09:47.794 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:47.794 "strip_size_kb": 0, 00:09:47.794 "state": "online", 00:09:47.794 "raid_level": "raid1", 00:09:47.794 "superblock": true, 00:09:47.794 "num_base_bdevs": 3, 00:09:47.794 "num_base_bdevs_discovered": 2, 00:09:47.794 "num_base_bdevs_operational": 2, 00:09:47.794 "base_bdevs_list": [ 00:09:47.794 { 00:09:47.794 "name": null, 00:09:47.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.794 "is_configured": false, 00:09:47.794 "data_offset": 0, 00:09:47.794 "data_size": 63488 00:09:47.794 }, 00:09:47.794 { 00:09:47.794 "name": "pt2", 00:09:47.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.794 "is_configured": true, 00:09:47.794 "data_offset": 2048, 00:09:47.794 "data_size": 63488 00:09:47.794 }, 00:09:47.794 { 00:09:47.794 "name": "pt3", 00:09:47.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.794 "is_configured": true, 00:09:47.794 "data_offset": 2048, 00:09:47.794 "data_size": 63488 00:09:47.794 } 
00:09:47.794 ] 00:09:47.794 }' 00:09:47.794 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.794 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.053 [2024-12-12 19:38:30.814169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.053 [2024-12-12 19:38:30.814268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.053 [2024-12-12 19:38:30.814380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.053 [2024-12-12 19:38:30.814446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.053 [2024-12-12 19:38:30.814462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.053 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.054 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:48.054 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:48.054 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:48.054 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:48.054 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.054 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.054 19:38:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.314 [2024-12-12 19:38:30.897966] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.314 [2024-12-12 19:38:30.898075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.314 [2024-12-12 19:38:30.898096] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:48.314 [2024-12-12 19:38:30.898108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.314 [2024-12-12 19:38:30.900706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.314 [2024-12-12 19:38:30.900746] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.314 [2024-12-12 19:38:30.900830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.314 [2024-12-12 19:38:30.900883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.314 pt2 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.314 19:38:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.314 "name": "raid_bdev1", 00:09:48.314 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:48.314 "strip_size_kb": 0, 00:09:48.314 "state": "configuring", 00:09:48.314 "raid_level": "raid1", 00:09:48.314 "superblock": true, 00:09:48.314 "num_base_bdevs": 3, 00:09:48.314 "num_base_bdevs_discovered": 1, 00:09:48.314 "num_base_bdevs_operational": 2, 00:09:48.314 "base_bdevs_list": [ 00:09:48.314 { 00:09:48.314 "name": null, 00:09:48.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.314 "is_configured": false, 00:09:48.314 "data_offset": 2048, 00:09:48.314 "data_size": 63488 00:09:48.314 }, 00:09:48.314 { 00:09:48.314 "name": "pt2", 00:09:48.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.314 "is_configured": true, 00:09:48.314 "data_offset": 2048, 00:09:48.314 "data_size": 63488 00:09:48.314 }, 00:09:48.314 { 00:09:48.314 "name": null, 00:09:48.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.314 "is_configured": false, 00:09:48.314 "data_offset": 2048, 00:09:48.314 "data_size": 63488 00:09:48.314 } 
00:09:48.314 ] 00:09:48.314 }' 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.314 19:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.574 [2024-12-12 19:38:31.373327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:48.574 [2024-12-12 19:38:31.373433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.574 [2024-12-12 19:38:31.373459] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:48.574 [2024-12-12 19:38:31.373472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.574 [2024-12-12 19:38:31.374103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.574 [2024-12-12 19:38:31.374146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:48.574 [2024-12-12 19:38:31.374267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:48.574 [2024-12-12 19:38:31.374309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:48.574 [2024-12-12 19:38:31.374490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:48.574 [2024-12-12 19:38:31.374504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.574 pt3 00:09:48.574 [2024-12-12 19:38:31.374833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:48.574 [2024-12-12 19:38:31.375008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.574 [2024-12-12 19:38:31.375019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:48.574 [2024-12-12 19:38:31.375168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.574 "name": "raid_bdev1", 00:09:48.574 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:48.574 "strip_size_kb": 0, 00:09:48.574 "state": "online", 00:09:48.574 "raid_level": "raid1", 00:09:48.574 "superblock": true, 00:09:48.574 "num_base_bdevs": 3, 00:09:48.574 "num_base_bdevs_discovered": 2, 00:09:48.574 "num_base_bdevs_operational": 2, 00:09:48.574 "base_bdevs_list": [ 00:09:48.574 { 00:09:48.574 "name": null, 00:09:48.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.574 "is_configured": false, 00:09:48.574 "data_offset": 2048, 00:09:48.574 "data_size": 63488 00:09:48.574 }, 00:09:48.574 { 00:09:48.574 "name": "pt2", 00:09:48.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.574 "is_configured": true, 00:09:48.574 "data_offset": 2048, 00:09:48.574 "data_size": 63488 00:09:48.574 }, 00:09:48.574 { 00:09:48.574 "name": "pt3", 00:09:48.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.574 "is_configured": true, 00:09:48.574 "data_offset": 2048, 00:09:48.574 "data_size": 63488 00:09:48.574 } 00:09:48.574 ] 00:09:48.574 }' 00:09:48.574 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.833 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.092 [2024-12-12 19:38:31.824523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.092 [2024-12-12 19:38:31.824682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.092 [2024-12-12 19:38:31.824845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.092 [2024-12-12 19:38:31.824953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.092 [2024-12-12 19:38:31.824997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:49.092 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.093 [2024-12-12 19:38:31.896402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:49.093 [2024-12-12 19:38:31.896485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.093 [2024-12-12 19:38:31.896507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:49.093 [2024-12-12 19:38:31.896515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.093 [2024-12-12 19:38:31.899014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.093 [2024-12-12 19:38:31.899052] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:49.093 [2024-12-12 19:38:31.899148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:49.093 [2024-12-12 19:38:31.899204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:49.093 [2024-12-12 19:38:31.899353] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:49.093 [2024-12-12 19:38:31.899363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.093 [2024-12-12 19:38:31.899379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:49.093 [2024-12-12 19:38:31.899438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.093 pt1 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.093 19:38:31 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.352 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.352 "name": "raid_bdev1", 00:09:49.352 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:49.352 "strip_size_kb": 0, 00:09:49.352 "state": "configuring", 00:09:49.352 "raid_level": "raid1", 00:09:49.352 "superblock": true, 00:09:49.352 "num_base_bdevs": 3, 00:09:49.352 "num_base_bdevs_discovered": 1, 00:09:49.352 "num_base_bdevs_operational": 2, 00:09:49.352 "base_bdevs_list": [ 00:09:49.352 { 00:09:49.352 "name": null, 00:09:49.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.352 "is_configured": false, 00:09:49.352 "data_offset": 2048, 00:09:49.352 "data_size": 63488 00:09:49.352 }, 00:09:49.352 { 00:09:49.352 "name": "pt2", 00:09:49.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.352 "is_configured": true, 00:09:49.352 "data_offset": 2048, 00:09:49.352 "data_size": 63488 00:09:49.352 }, 00:09:49.352 { 00:09:49.352 "name": null, 00:09:49.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.352 "is_configured": false, 00:09:49.352 "data_offset": 2048, 00:09:49.352 "data_size": 63488 00:09:49.352 } 00:09:49.352 ] 00:09:49.352 }' 00:09:49.352 19:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.352 19:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.612 [2024-12-12 19:38:32.343722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.612 [2024-12-12 19:38:32.343932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.612 [2024-12-12 19:38:32.344003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:49.612 [2024-12-12 19:38:32.344033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.612 [2024-12-12 19:38:32.344708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.612 [2024-12-12 19:38:32.344777] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.612 [2024-12-12 19:38:32.344937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:49.612 [2024-12-12 19:38:32.344993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:49.612 [2024-12-12 19:38:32.345197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:49.612 [2024-12-12 19:38:32.345236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.612 [2024-12-12 19:38:32.345562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:49.612 [2024-12-12 19:38:32.345818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:49.612 [2024-12-12 19:38:32.345870] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:49.612 [2024-12-12 19:38:32.346096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.612 pt3 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:49.612 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.612 "name": "raid_bdev1", 00:09:49.612 "uuid": "84cd1df6-0df9-4df9-9745-08fcface453f", 00:09:49.612 "strip_size_kb": 0, 00:09:49.612 "state": "online", 00:09:49.612 "raid_level": "raid1", 00:09:49.612 "superblock": true, 00:09:49.612 "num_base_bdevs": 3, 00:09:49.612 "num_base_bdevs_discovered": 2, 00:09:49.612 "num_base_bdevs_operational": 2, 00:09:49.612 "base_bdevs_list": [ 00:09:49.612 { 00:09:49.612 "name": null, 00:09:49.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.612 "is_configured": false, 00:09:49.612 "data_offset": 2048, 00:09:49.612 "data_size": 63488 00:09:49.612 }, 00:09:49.612 { 00:09:49.612 "name": "pt2", 00:09:49.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.612 "is_configured": true, 00:09:49.612 "data_offset": 2048, 00:09:49.612 "data_size": 63488 00:09:49.612 }, 00:09:49.612 { 00:09:49.612 "name": "pt3", 00:09:49.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.612 "is_configured": true, 00:09:49.612 "data_offset": 2048, 00:09:49.612 "data_size": 63488 00:09:49.612 } 00:09:49.613 ] 00:09:49.613 }' 00:09:49.613 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.613 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:50.182 [2024-12-12 19:38:32.887012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 84cd1df6-0df9-4df9-9745-08fcface453f '!=' 84cd1df6-0df9-4df9-9745-08fcface453f ']' 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70338 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70338 ']' 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70338 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70338 00:09:50.182 killing process with pid 70338 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70338' 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 70338 00:09:50.182 [2024-12-12 19:38:32.973322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.182 19:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70338 00:09:50.182 [2024-12-12 19:38:32.973465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.182 [2024-12-12 19:38:32.973559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.182 [2024-12-12 19:38:32.973574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:50.751 [2024-12-12 19:38:33.297115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.687 19:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:51.687 00:09:51.687 real 0m7.925s 00:09:51.687 user 0m12.220s 00:09:51.687 sys 0m1.527s 00:09:51.687 ************************************ 00:09:51.687 END TEST raid_superblock_test 00:09:51.687 ************************************ 00:09:51.687 19:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.687 19:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.947 19:38:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:51.947 19:38:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:51.947 19:38:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.947 19:38:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.947 ************************************ 00:09:51.947 START TEST raid_read_error_test 00:09:51.947 ************************************ 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:51.947 19:38:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.947 19:38:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KD9UZu4YdC 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70784 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70784 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70784 ']' 00:09:51.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.947 19:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.947 [2024-12-12 19:38:34.672124] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:51.947 [2024-12-12 19:38:34.672238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:09:52.206 [2024-12-12 19:38:34.847462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.206 [2024-12-12 19:38:35.001927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.470 [2024-12-12 19:38:35.234495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.470 [2024-12-12 19:38:35.234541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.731 BaseBdev1_malloc 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.731 true 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.731 [2024-12-12 19:38:35.557490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.731 [2024-12-12 19:38:35.557594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.731 [2024-12-12 19:38:35.557631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.731 [2024-12-12 19:38:35.557644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.731 [2024-12-12 19:38:35.560143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.731 [2024-12-12 19:38:35.560187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.731 BaseBdev1 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.731 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.991 BaseBdev2_malloc 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.991 true 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.991 [2024-12-12 19:38:35.631172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.991 [2024-12-12 19:38:35.631231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.991 [2024-12-12 19:38:35.631247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.991 [2024-12-12 19:38:35.631258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.991 [2024-12-12 19:38:35.633500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.991 [2024-12-12 19:38:35.633538] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.991 BaseBdev2 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.991 BaseBdev3_malloc 00:09:52.991 19:38:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.991 true 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.991 [2024-12-12 19:38:35.716928] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:52.991 [2024-12-12 19:38:35.717063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.991 [2024-12-12 19:38:35.717083] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:52.991 [2024-12-12 19:38:35.717095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.991 [2024-12-12 19:38:35.719425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.991 [2024-12-12 19:38:35.719466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:52.991 BaseBdev3 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.991 [2024-12-12 19:38:35.728984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.991 [2024-12-12 19:38:35.730984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.991 [2024-12-12 19:38:35.731056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.991 [2024-12-12 19:38:35.731253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:52.991 [2024-12-12 19:38:35.731265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.991 [2024-12-12 19:38:35.731497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:52.991 [2024-12-12 19:38:35.731679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:52.991 [2024-12-12 19:38:35.731691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:52.991 [2024-12-12 19:38:35.731834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.991 19:38:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.991 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.992 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.992 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.992 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.992 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.992 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.992 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.992 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.992 "name": "raid_bdev1", 00:09:52.992 "uuid": "55dacae0-f943-4422-8bb1-e5e68b0dcdf4", 00:09:52.992 "strip_size_kb": 0, 00:09:52.992 "state": "online", 00:09:52.992 "raid_level": "raid1", 00:09:52.992 "superblock": true, 00:09:52.992 "num_base_bdevs": 3, 00:09:52.992 "num_base_bdevs_discovered": 3, 00:09:52.992 "num_base_bdevs_operational": 3, 00:09:52.992 "base_bdevs_list": [ 00:09:52.992 { 00:09:52.992 "name": "BaseBdev1", 00:09:52.992 "uuid": "246cb281-596b-5fd2-9c9e-b6b02e1f864e", 00:09:52.992 "is_configured": true, 00:09:52.992 "data_offset": 2048, 00:09:52.992 "data_size": 63488 00:09:52.992 }, 00:09:52.992 { 00:09:52.992 "name": "BaseBdev2", 00:09:52.992 "uuid": "7f8ebfda-e2e6-5194-98e7-0fcb4f760eff", 00:09:52.992 "is_configured": true, 00:09:52.992 "data_offset": 2048, 00:09:52.992 "data_size": 63488 
00:09:52.992 }, 00:09:52.992 { 00:09:52.992 "name": "BaseBdev3", 00:09:52.992 "uuid": "f573dc3c-6267-5044-ac72-91442c1dfb51", 00:09:52.992 "is_configured": true, 00:09:52.992 "data_offset": 2048, 00:09:52.992 "data_size": 63488 00:09:52.992 } 00:09:52.992 ] 00:09:52.992 }' 00:09:52.992 19:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.992 19:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.562 19:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:53.562 19:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.562 [2024-12-12 19:38:36.237859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.501 
19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.501 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.501 "name": "raid_bdev1", 00:09:54.501 "uuid": "55dacae0-f943-4422-8bb1-e5e68b0dcdf4", 00:09:54.501 "strip_size_kb": 0, 00:09:54.501 "state": "online", 00:09:54.501 "raid_level": "raid1", 00:09:54.501 "superblock": true, 00:09:54.501 "num_base_bdevs": 3, 00:09:54.501 "num_base_bdevs_discovered": 3, 00:09:54.501 "num_base_bdevs_operational": 3, 00:09:54.501 "base_bdevs_list": [ 00:09:54.501 { 00:09:54.501 "name": "BaseBdev1", 00:09:54.502 "uuid": "246cb281-596b-5fd2-9c9e-b6b02e1f864e", 
00:09:54.502 "is_configured": true, 00:09:54.502 "data_offset": 2048, 00:09:54.502 "data_size": 63488 00:09:54.502 }, 00:09:54.502 { 00:09:54.502 "name": "BaseBdev2", 00:09:54.502 "uuid": "7f8ebfda-e2e6-5194-98e7-0fcb4f760eff", 00:09:54.502 "is_configured": true, 00:09:54.502 "data_offset": 2048, 00:09:54.502 "data_size": 63488 00:09:54.502 }, 00:09:54.502 { 00:09:54.502 "name": "BaseBdev3", 00:09:54.502 "uuid": "f573dc3c-6267-5044-ac72-91442c1dfb51", 00:09:54.502 "is_configured": true, 00:09:54.502 "data_offset": 2048, 00:09:54.502 "data_size": 63488 00:09:54.502 } 00:09:54.502 ] 00:09:54.502 }' 00:09:54.502 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.502 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.149 [2024-12-12 19:38:37.645257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.149 [2024-12-12 19:38:37.645308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.149 [2024-12-12 19:38:37.648106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.149 [2024-12-12 19:38:37.648168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.149 [2024-12-12 19:38:37.648280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.149 [2024-12-12 19:38:37.648297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:55.149 { 00:09:55.149 "results": [ 00:09:55.149 { 00:09:55.149 "job": "raid_bdev1", 
00:09:55.149 "core_mask": "0x1", 00:09:55.149 "workload": "randrw", 00:09:55.149 "percentage": 50, 00:09:55.149 "status": "finished", 00:09:55.149 "queue_depth": 1, 00:09:55.149 "io_size": 131072, 00:09:55.149 "runtime": 1.408037, 00:09:55.149 "iops": 9989.794302280408, 00:09:55.149 "mibps": 1248.724287785051, 00:09:55.149 "io_failed": 0, 00:09:55.149 "io_timeout": 0, 00:09:55.149 "avg_latency_us": 97.50501373127433, 00:09:55.149 "min_latency_us": 22.91703056768559, 00:09:55.149 "max_latency_us": 1302.134497816594 00:09:55.149 } 00:09:55.149 ], 00:09:55.149 "core_count": 1 00:09:55.149 } 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70784 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70784 ']' 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70784 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70784 00:09:55.149 killing process with pid 70784 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70784' 00:09:55.149 19:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70784 00:09:55.149 [2024-12-12 19:38:37.681385] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.149 19:38:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70784 00:09:55.149 [2024-12-12 19:38:37.926796] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KD9UZu4YdC 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:56.533 00:09:56.533 real 0m4.607s 00:09:56.533 user 0m5.348s 00:09:56.533 sys 0m0.643s 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.533 19:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.533 ************************************ 00:09:56.533 END TEST raid_read_error_test 00:09:56.533 ************************************ 00:09:56.533 19:38:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:56.533 19:38:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:56.533 19:38:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.533 19:38:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.533 ************************************ 00:09:56.533 START TEST raid_write_error_test 00:09:56.533 ************************************ 00:09:56.533 19:38:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VUGTwhbPDy 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70932 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70932 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70932 ']' 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.533 19:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.533 [2024-12-12 19:38:39.348784] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:56.534 [2024-12-12 19:38:39.348903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70932 ] 00:09:56.793 [2024-12-12 19:38:39.520515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.053 [2024-12-12 19:38:39.653397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.053 [2024-12-12 19:38:39.879750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.053 [2024-12-12 19:38:39.879822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.622 BaseBdev1_malloc 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.622 true 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.622 [2024-12-12 19:38:40.249978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.622 [2024-12-12 19:38:40.250051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.622 [2024-12-12 19:38:40.250073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:57.622 [2024-12-12 19:38:40.250085] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.622 [2024-12-12 19:38:40.252501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.622 [2024-12-12 19:38:40.252553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.622 BaseBdev1 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.622 BaseBdev2_malloc 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.622 true 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.622 [2024-12-12 19:38:40.326113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:57.622 [2024-12-12 19:38:40.326198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.622 [2024-12-12 19:38:40.326228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:57.622 [2024-12-12 19:38:40.326244] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.622 [2024-12-12 19:38:40.329155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.622 [2024-12-12 19:38:40.329208] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:57.622 BaseBdev2 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.622 19:38:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.622 BaseBdev3_malloc 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.622 true 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.622 [2024-12-12 19:38:40.409761] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:57.622 [2024-12-12 19:38:40.409835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.622 [2024-12-12 19:38:40.409856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:57.622 [2024-12-12 19:38:40.409867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.622 [2024-12-12 19:38:40.412347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.622 [2024-12-12 19:38:40.412387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:57.622 BaseBdev3 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.622 [2024-12-12 19:38:40.421819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.622 [2024-12-12 19:38:40.423921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.622 [2024-12-12 19:38:40.424000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.622 [2024-12-12 19:38:40.424213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:57.622 [2024-12-12 19:38:40.424239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.622 [2024-12-12 19:38:40.424569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:57.622 [2024-12-12 19:38:40.424765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:57.622 [2024-12-12 19:38:40.424785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:57.622 [2024-12-12 19:38:40.424956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.622 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.623 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.623 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.623 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.623 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.623 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.623 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.882 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.882 "name": "raid_bdev1", 00:09:57.882 "uuid": "8080f1d1-4357-4c95-b42f-290373677dc7", 00:09:57.882 "strip_size_kb": 0, 00:09:57.882 "state": "online", 00:09:57.882 "raid_level": "raid1", 00:09:57.882 "superblock": true, 00:09:57.882 "num_base_bdevs": 3, 00:09:57.882 "num_base_bdevs_discovered": 3, 00:09:57.882 "num_base_bdevs_operational": 3, 00:09:57.882 "base_bdevs_list": [ 00:09:57.882 { 00:09:57.882 "name": "BaseBdev1", 00:09:57.882 
"uuid": "029a4ef2-f5f8-52d8-a02a-9ecd897279e0", 00:09:57.882 "is_configured": true, 00:09:57.882 "data_offset": 2048, 00:09:57.882 "data_size": 63488 00:09:57.882 }, 00:09:57.882 { 00:09:57.882 "name": "BaseBdev2", 00:09:57.883 "uuid": "ebbe77e5-cd23-5ae1-876a-dbefca0f6810", 00:09:57.883 "is_configured": true, 00:09:57.883 "data_offset": 2048, 00:09:57.883 "data_size": 63488 00:09:57.883 }, 00:09:57.883 { 00:09:57.883 "name": "BaseBdev3", 00:09:57.883 "uuid": "1eff5633-f8df-5a7c-93fe-b22abbef280f", 00:09:57.883 "is_configured": true, 00:09:57.883 "data_offset": 2048, 00:09:57.883 "data_size": 63488 00:09:57.883 } 00:09:57.883 ] 00:09:57.883 }' 00:09:57.883 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.883 19:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.142 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:58.142 19:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:58.401 [2024-12-12 19:38:40.986413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.341 [2024-12-12 19:38:41.901624] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:59.341 [2024-12-12 19:38:41.901703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.341 [2024-12-12 19:38:41.901942] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.341 "name": "raid_bdev1", 00:09:59.341 "uuid": "8080f1d1-4357-4c95-b42f-290373677dc7", 00:09:59.341 "strip_size_kb": 0, 00:09:59.341 "state": "online", 00:09:59.341 "raid_level": "raid1", 00:09:59.341 "superblock": true, 00:09:59.341 "num_base_bdevs": 3, 00:09:59.341 "num_base_bdevs_discovered": 2, 00:09:59.341 "num_base_bdevs_operational": 2, 00:09:59.341 "base_bdevs_list": [ 00:09:59.341 { 00:09:59.341 "name": null, 00:09:59.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.341 "is_configured": false, 00:09:59.341 "data_offset": 0, 00:09:59.341 "data_size": 63488 00:09:59.341 }, 00:09:59.341 { 00:09:59.341 "name": "BaseBdev2", 00:09:59.341 "uuid": "ebbe77e5-cd23-5ae1-876a-dbefca0f6810", 00:09:59.341 "is_configured": true, 00:09:59.341 "data_offset": 2048, 00:09:59.341 "data_size": 63488 00:09:59.341 }, 00:09:59.341 { 00:09:59.341 "name": "BaseBdev3", 00:09:59.341 "uuid": "1eff5633-f8df-5a7c-93fe-b22abbef280f", 00:09:59.341 "is_configured": true, 00:09:59.341 "data_offset": 2048, 00:09:59.341 "data_size": 63488 00:09:59.341 } 00:09:59.341 ] 00:09:59.341 }' 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.341 19:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.601 [2024-12-12 19:38:42.316531] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.601 [2024-12-12 19:38:42.316601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.601 [2024-12-12 19:38:42.319181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.601 [2024-12-12 19:38:42.319254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.601 [2024-12-12 19:38:42.319340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.601 [2024-12-12 19:38:42.319357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:59.601 { 00:09:59.601 "results": [ 00:09:59.601 { 00:09:59.601 "job": "raid_bdev1", 00:09:59.601 "core_mask": "0x1", 00:09:59.601 "workload": "randrw", 00:09:59.601 "percentage": 50, 00:09:59.601 "status": "finished", 00:09:59.601 "queue_depth": 1, 00:09:59.601 "io_size": 131072, 00:09:59.601 "runtime": 1.330536, 00:09:59.601 "iops": 11172.189253052906, 00:09:59.601 "mibps": 1396.5236566316132, 00:09:59.601 "io_failed": 0, 00:09:59.601 "io_timeout": 0, 00:09:59.601 "avg_latency_us": 86.8834295265835, 00:09:59.601 "min_latency_us": 23.699563318777294, 00:09:59.601 "max_latency_us": 1395.1441048034935 00:09:59.601 } 00:09:59.601 ], 00:09:59.601 "core_count": 1 00:09:59.601 } 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70932 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70932 ']' 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70932 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:59.601 19:38:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70932 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.601 killing process with pid 70932 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70932' 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70932 00:09:59.601 19:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70932 00:09:59.601 [2024-12-12 19:38:42.361354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.861 [2024-12-12 19:38:42.606550] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.242 19:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VUGTwhbPDy 00:10:01.242 19:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:01.242 19:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:01.242 19:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:01.242 19:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:01.242 19:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.242 19:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:01.242 19:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:01.242 00:10:01.242 real 0m4.635s 00:10:01.242 user 0m5.346s 00:10:01.242 sys 0m0.668s 00:10:01.242 19:38:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.242 19:38:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.242 ************************************ 00:10:01.242 END TEST raid_write_error_test 00:10:01.242 ************************************ 00:10:01.242 19:38:43 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:01.242 19:38:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:01.242 19:38:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:01.242 19:38:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:01.242 19:38:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.242 19:38:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.242 ************************************ 00:10:01.242 START TEST raid_state_function_test 00:10:01.242 ************************************ 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:01.242 
19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71075 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71075' 00:10:01.242 Process raid pid: 71075 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71075 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71075 ']' 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.242 19:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.242 [2024-12-12 19:38:44.042851] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:01.242 [2024-12-12 19:38:44.043028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.502 [2024-12-12 19:38:44.201936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.502 [2024-12-12 19:38:44.331909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.762 [2024-12-12 19:38:44.570844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.762 [2024-12-12 19:38:44.570999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.331 19:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.331 19:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:02.331 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.331 19:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.331 19:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.331 [2024-12-12 19:38:44.874657] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.331 [2024-12-12 19:38:44.874720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.331 [2024-12-12 19:38:44.874736] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.331 [2024-12-12 19:38:44.874746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.331 [2024-12-12 19:38:44.874752] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:02.331 [2024-12-12 19:38:44.874761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.331 [2024-12-12 19:38:44.874766] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.331 [2024-12-12 19:38:44.874774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.331 19:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.332 "name": "Existed_Raid", 00:10:02.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.332 "strip_size_kb": 64, 00:10:02.332 "state": "configuring", 00:10:02.332 "raid_level": "raid0", 00:10:02.332 "superblock": false, 00:10:02.332 "num_base_bdevs": 4, 00:10:02.332 "num_base_bdevs_discovered": 0, 00:10:02.332 "num_base_bdevs_operational": 4, 00:10:02.332 "base_bdevs_list": [ 00:10:02.332 { 00:10:02.332 "name": "BaseBdev1", 00:10:02.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.332 "is_configured": false, 00:10:02.332 "data_offset": 0, 00:10:02.332 "data_size": 0 00:10:02.332 }, 00:10:02.332 { 00:10:02.332 "name": "BaseBdev2", 00:10:02.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.332 "is_configured": false, 00:10:02.332 "data_offset": 0, 00:10:02.332 "data_size": 0 00:10:02.332 }, 00:10:02.332 { 00:10:02.332 "name": "BaseBdev3", 00:10:02.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.332 "is_configured": false, 00:10:02.332 "data_offset": 0, 00:10:02.332 "data_size": 0 00:10:02.332 }, 00:10:02.332 { 00:10:02.332 "name": "BaseBdev4", 00:10:02.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.332 "is_configured": false, 00:10:02.332 "data_offset": 0, 00:10:02.332 "data_size": 0 00:10:02.332 } 00:10:02.332 ] 00:10:02.332 }' 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.332 19:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.591 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:02.591 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.591 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.591 [2024-12-12 19:38:45.253928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.591 [2024-12-12 19:38:45.254031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:02.591 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.591 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.591 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.591 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.592 [2024-12-12 19:38:45.265923] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.592 [2024-12-12 19:38:45.266006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.592 [2024-12-12 19:38:45.266032] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.592 [2024-12-12 19:38:45.266055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.592 [2024-12-12 19:38:45.266071] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.592 [2024-12-12 19:38:45.266092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.592 [2024-12-12 19:38:45.266107] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.592 [2024-12-12 19:38:45.266127] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.592 [2024-12-12 19:38:45.319226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.592 BaseBdev1 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.592 [ 00:10:02.592 { 00:10:02.592 "name": "BaseBdev1", 00:10:02.592 "aliases": [ 00:10:02.592 "1eb3a08e-0166-4e9f-a9c8-4da96eba8ef1" 00:10:02.592 ], 00:10:02.592 "product_name": "Malloc disk", 00:10:02.592 "block_size": 512, 00:10:02.592 "num_blocks": 65536, 00:10:02.592 "uuid": "1eb3a08e-0166-4e9f-a9c8-4da96eba8ef1", 00:10:02.592 "assigned_rate_limits": { 00:10:02.592 "rw_ios_per_sec": 0, 00:10:02.592 "rw_mbytes_per_sec": 0, 00:10:02.592 "r_mbytes_per_sec": 0, 00:10:02.592 "w_mbytes_per_sec": 0 00:10:02.592 }, 00:10:02.592 "claimed": true, 00:10:02.592 "claim_type": "exclusive_write", 00:10:02.592 "zoned": false, 00:10:02.592 "supported_io_types": { 00:10:02.592 "read": true, 00:10:02.592 "write": true, 00:10:02.592 "unmap": true, 00:10:02.592 "flush": true, 00:10:02.592 "reset": true, 00:10:02.592 "nvme_admin": false, 00:10:02.592 "nvme_io": false, 00:10:02.592 "nvme_io_md": false, 00:10:02.592 "write_zeroes": true, 00:10:02.592 "zcopy": true, 00:10:02.592 "get_zone_info": false, 00:10:02.592 "zone_management": false, 00:10:02.592 "zone_append": false, 00:10:02.592 "compare": false, 00:10:02.592 "compare_and_write": false, 00:10:02.592 "abort": true, 00:10:02.592 "seek_hole": false, 00:10:02.592 "seek_data": false, 00:10:02.592 "copy": true, 00:10:02.592 "nvme_iov_md": false 00:10:02.592 }, 00:10:02.592 "memory_domains": [ 00:10:02.592 { 00:10:02.592 "dma_device_id": "system", 00:10:02.592 "dma_device_type": 1 00:10:02.592 }, 00:10:02.592 { 00:10:02.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.592 "dma_device_type": 2 00:10:02.592 } 00:10:02.592 ], 00:10:02.592 "driver_specific": {} 00:10:02.592 } 00:10:02.592 ] 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.592 "name": "Existed_Raid", 
00:10:02.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.592 "strip_size_kb": 64, 00:10:02.592 "state": "configuring", 00:10:02.592 "raid_level": "raid0", 00:10:02.592 "superblock": false, 00:10:02.592 "num_base_bdevs": 4, 00:10:02.592 "num_base_bdevs_discovered": 1, 00:10:02.592 "num_base_bdevs_operational": 4, 00:10:02.592 "base_bdevs_list": [ 00:10:02.592 { 00:10:02.592 "name": "BaseBdev1", 00:10:02.592 "uuid": "1eb3a08e-0166-4e9f-a9c8-4da96eba8ef1", 00:10:02.592 "is_configured": true, 00:10:02.592 "data_offset": 0, 00:10:02.592 "data_size": 65536 00:10:02.592 }, 00:10:02.592 { 00:10:02.592 "name": "BaseBdev2", 00:10:02.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.592 "is_configured": false, 00:10:02.592 "data_offset": 0, 00:10:02.592 "data_size": 0 00:10:02.592 }, 00:10:02.592 { 00:10:02.592 "name": "BaseBdev3", 00:10:02.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.592 "is_configured": false, 00:10:02.592 "data_offset": 0, 00:10:02.592 "data_size": 0 00:10:02.592 }, 00:10:02.592 { 00:10:02.592 "name": "BaseBdev4", 00:10:02.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.592 "is_configured": false, 00:10:02.592 "data_offset": 0, 00:10:02.592 "data_size": 0 00:10:02.592 } 00:10:02.592 ] 00:10:02.592 }' 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.592 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.162 [2024-12-12 19:38:45.806488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.162 [2024-12-12 19:38:45.806684] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.162 [2024-12-12 19:38:45.814496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.162 [2024-12-12 19:38:45.816758] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.162 [2024-12-12 19:38:45.816847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.162 [2024-12-12 19:38:45.816862] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.162 [2024-12-12 19:38:45.816873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.162 [2024-12-12 19:38:45.816879] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.162 [2024-12-12 19:38:45.816888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.162 "name": "Existed_Raid", 00:10:03.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.162 "strip_size_kb": 64, 00:10:03.162 "state": "configuring", 00:10:03.162 "raid_level": "raid0", 00:10:03.162 "superblock": false, 00:10:03.162 "num_base_bdevs": 4, 00:10:03.162 
"num_base_bdevs_discovered": 1, 00:10:03.162 "num_base_bdevs_operational": 4, 00:10:03.162 "base_bdevs_list": [ 00:10:03.162 { 00:10:03.162 "name": "BaseBdev1", 00:10:03.162 "uuid": "1eb3a08e-0166-4e9f-a9c8-4da96eba8ef1", 00:10:03.162 "is_configured": true, 00:10:03.162 "data_offset": 0, 00:10:03.162 "data_size": 65536 00:10:03.162 }, 00:10:03.162 { 00:10:03.162 "name": "BaseBdev2", 00:10:03.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.162 "is_configured": false, 00:10:03.162 "data_offset": 0, 00:10:03.162 "data_size": 0 00:10:03.162 }, 00:10:03.162 { 00:10:03.162 "name": "BaseBdev3", 00:10:03.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.162 "is_configured": false, 00:10:03.162 "data_offset": 0, 00:10:03.162 "data_size": 0 00:10:03.162 }, 00:10:03.162 { 00:10:03.162 "name": "BaseBdev4", 00:10:03.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.162 "is_configured": false, 00:10:03.162 "data_offset": 0, 00:10:03.162 "data_size": 0 00:10:03.162 } 00:10:03.162 ] 00:10:03.162 }' 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.162 19:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.422 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.422 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.422 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.682 [2024-12-12 19:38:46.304678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.682 BaseBdev2 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.682 19:38:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.682 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.682 [ 00:10:03.682 { 00:10:03.682 "name": "BaseBdev2", 00:10:03.682 "aliases": [ 00:10:03.682 "96160e66-0b5b-4c8a-9006-fdf8a8dc5768" 00:10:03.682 ], 00:10:03.682 "product_name": "Malloc disk", 00:10:03.682 "block_size": 512, 00:10:03.682 "num_blocks": 65536, 00:10:03.682 "uuid": "96160e66-0b5b-4c8a-9006-fdf8a8dc5768", 00:10:03.682 "assigned_rate_limits": { 00:10:03.682 "rw_ios_per_sec": 0, 00:10:03.682 "rw_mbytes_per_sec": 0, 00:10:03.682 "r_mbytes_per_sec": 0, 00:10:03.682 "w_mbytes_per_sec": 0 00:10:03.682 }, 00:10:03.682 "claimed": true, 00:10:03.682 "claim_type": "exclusive_write", 00:10:03.682 "zoned": false, 00:10:03.682 "supported_io_types": { 
00:10:03.682 "read": true, 00:10:03.682 "write": true, 00:10:03.682 "unmap": true, 00:10:03.682 "flush": true, 00:10:03.682 "reset": true, 00:10:03.682 "nvme_admin": false, 00:10:03.682 "nvme_io": false, 00:10:03.682 "nvme_io_md": false, 00:10:03.682 "write_zeroes": true, 00:10:03.682 "zcopy": true, 00:10:03.682 "get_zone_info": false, 00:10:03.682 "zone_management": false, 00:10:03.682 "zone_append": false, 00:10:03.682 "compare": false, 00:10:03.682 "compare_and_write": false, 00:10:03.682 "abort": true, 00:10:03.682 "seek_hole": false, 00:10:03.682 "seek_data": false, 00:10:03.682 "copy": true, 00:10:03.682 "nvme_iov_md": false 00:10:03.682 }, 00:10:03.682 "memory_domains": [ 00:10:03.682 { 00:10:03.682 "dma_device_id": "system", 00:10:03.682 "dma_device_type": 1 00:10:03.682 }, 00:10:03.682 { 00:10:03.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.682 "dma_device_type": 2 00:10:03.682 } 00:10:03.682 ], 00:10:03.683 "driver_specific": {} 00:10:03.683 } 00:10:03.683 ] 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.683 "name": "Existed_Raid", 00:10:03.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.683 "strip_size_kb": 64, 00:10:03.683 "state": "configuring", 00:10:03.683 "raid_level": "raid0", 00:10:03.683 "superblock": false, 00:10:03.683 "num_base_bdevs": 4, 00:10:03.683 "num_base_bdevs_discovered": 2, 00:10:03.683 "num_base_bdevs_operational": 4, 00:10:03.683 "base_bdevs_list": [ 00:10:03.683 { 00:10:03.683 "name": "BaseBdev1", 00:10:03.683 "uuid": "1eb3a08e-0166-4e9f-a9c8-4da96eba8ef1", 00:10:03.683 "is_configured": true, 00:10:03.683 "data_offset": 0, 00:10:03.683 "data_size": 65536 00:10:03.683 }, 00:10:03.683 { 00:10:03.683 "name": "BaseBdev2", 00:10:03.683 "uuid": "96160e66-0b5b-4c8a-9006-fdf8a8dc5768", 00:10:03.683 
"is_configured": true, 00:10:03.683 "data_offset": 0, 00:10:03.683 "data_size": 65536 00:10:03.683 }, 00:10:03.683 { 00:10:03.683 "name": "BaseBdev3", 00:10:03.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.683 "is_configured": false, 00:10:03.683 "data_offset": 0, 00:10:03.683 "data_size": 0 00:10:03.683 }, 00:10:03.683 { 00:10:03.683 "name": "BaseBdev4", 00:10:03.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.683 "is_configured": false, 00:10:03.683 "data_offset": 0, 00:10:03.683 "data_size": 0 00:10:03.683 } 00:10:03.683 ] 00:10:03.683 }' 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.683 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.943 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.943 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.943 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.203 [2024-12-12 19:38:46.840700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.203 BaseBdev3 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.203 [ 00:10:04.203 { 00:10:04.203 "name": "BaseBdev3", 00:10:04.203 "aliases": [ 00:10:04.203 "2d45b382-8080-46bc-bd37-5408f8391796" 00:10:04.203 ], 00:10:04.203 "product_name": "Malloc disk", 00:10:04.203 "block_size": 512, 00:10:04.203 "num_blocks": 65536, 00:10:04.203 "uuid": "2d45b382-8080-46bc-bd37-5408f8391796", 00:10:04.203 "assigned_rate_limits": { 00:10:04.203 "rw_ios_per_sec": 0, 00:10:04.203 "rw_mbytes_per_sec": 0, 00:10:04.203 "r_mbytes_per_sec": 0, 00:10:04.203 "w_mbytes_per_sec": 0 00:10:04.203 }, 00:10:04.203 "claimed": true, 00:10:04.203 "claim_type": "exclusive_write", 00:10:04.203 "zoned": false, 00:10:04.203 "supported_io_types": { 00:10:04.203 "read": true, 00:10:04.203 "write": true, 00:10:04.203 "unmap": true, 00:10:04.203 "flush": true, 00:10:04.203 "reset": true, 00:10:04.203 "nvme_admin": false, 00:10:04.203 "nvme_io": false, 00:10:04.203 "nvme_io_md": false, 00:10:04.203 "write_zeroes": true, 00:10:04.203 "zcopy": true, 00:10:04.203 "get_zone_info": false, 00:10:04.203 "zone_management": false, 00:10:04.203 "zone_append": false, 00:10:04.203 "compare": false, 00:10:04.203 "compare_and_write": false, 
00:10:04.203 "abort": true, 00:10:04.203 "seek_hole": false, 00:10:04.203 "seek_data": false, 00:10:04.203 "copy": true, 00:10:04.203 "nvme_iov_md": false 00:10:04.203 }, 00:10:04.203 "memory_domains": [ 00:10:04.203 { 00:10:04.203 "dma_device_id": "system", 00:10:04.203 "dma_device_type": 1 00:10:04.203 }, 00:10:04.203 { 00:10:04.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.203 "dma_device_type": 2 00:10:04.203 } 00:10:04.203 ], 00:10:04.203 "driver_specific": {} 00:10:04.203 } 00:10:04.203 ] 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.203 "name": "Existed_Raid", 00:10:04.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.203 "strip_size_kb": 64, 00:10:04.203 "state": "configuring", 00:10:04.203 "raid_level": "raid0", 00:10:04.203 "superblock": false, 00:10:04.203 "num_base_bdevs": 4, 00:10:04.203 "num_base_bdevs_discovered": 3, 00:10:04.203 "num_base_bdevs_operational": 4, 00:10:04.203 "base_bdevs_list": [ 00:10:04.203 { 00:10:04.203 "name": "BaseBdev1", 00:10:04.203 "uuid": "1eb3a08e-0166-4e9f-a9c8-4da96eba8ef1", 00:10:04.203 "is_configured": true, 00:10:04.203 "data_offset": 0, 00:10:04.203 "data_size": 65536 00:10:04.203 }, 00:10:04.203 { 00:10:04.203 "name": "BaseBdev2", 00:10:04.203 "uuid": "96160e66-0b5b-4c8a-9006-fdf8a8dc5768", 00:10:04.203 "is_configured": true, 00:10:04.203 "data_offset": 0, 00:10:04.203 "data_size": 65536 00:10:04.203 }, 00:10:04.203 { 00:10:04.203 "name": "BaseBdev3", 00:10:04.203 "uuid": "2d45b382-8080-46bc-bd37-5408f8391796", 00:10:04.203 "is_configured": true, 00:10:04.203 "data_offset": 0, 00:10:04.203 "data_size": 65536 00:10:04.203 }, 00:10:04.203 { 00:10:04.203 "name": "BaseBdev4", 00:10:04.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.203 "is_configured": false, 
00:10:04.203 "data_offset": 0, 00:10:04.203 "data_size": 0 00:10:04.203 } 00:10:04.203 ] 00:10:04.203 }' 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.203 19:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.463 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:04.463 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.463 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.723 [2024-12-12 19:38:47.344003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:04.723 [2024-12-12 19:38:47.344057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.723 [2024-12-12 19:38:47.344067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:04.723 [2024-12-12 19:38:47.344356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:04.723 [2024-12-12 19:38:47.344545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:04.723 [2024-12-12 19:38:47.344584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:04.723 [2024-12-12 19:38:47.344880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.723 BaseBdev4 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.723 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.723 [ 00:10:04.723 { 00:10:04.723 "name": "BaseBdev4", 00:10:04.723 "aliases": [ 00:10:04.723 "81075fb9-f3e2-4e95-8ffb-051811140d28" 00:10:04.723 ], 00:10:04.723 "product_name": "Malloc disk", 00:10:04.723 "block_size": 512, 00:10:04.723 "num_blocks": 65536, 00:10:04.723 "uuid": "81075fb9-f3e2-4e95-8ffb-051811140d28", 00:10:04.723 "assigned_rate_limits": { 00:10:04.723 "rw_ios_per_sec": 0, 00:10:04.723 "rw_mbytes_per_sec": 0, 00:10:04.723 "r_mbytes_per_sec": 0, 00:10:04.723 "w_mbytes_per_sec": 0 00:10:04.723 }, 00:10:04.723 "claimed": true, 00:10:04.723 "claim_type": "exclusive_write", 00:10:04.723 "zoned": false, 00:10:04.723 "supported_io_types": { 00:10:04.723 "read": true, 00:10:04.723 "write": true, 00:10:04.723 "unmap": true, 00:10:04.723 "flush": true, 00:10:04.723 "reset": true, 00:10:04.723 
"nvme_admin": false, 00:10:04.723 "nvme_io": false, 00:10:04.723 "nvme_io_md": false, 00:10:04.723 "write_zeroes": true, 00:10:04.723 "zcopy": true, 00:10:04.723 "get_zone_info": false, 00:10:04.723 "zone_management": false, 00:10:04.723 "zone_append": false, 00:10:04.724 "compare": false, 00:10:04.724 "compare_and_write": false, 00:10:04.724 "abort": true, 00:10:04.724 "seek_hole": false, 00:10:04.724 "seek_data": false, 00:10:04.724 "copy": true, 00:10:04.724 "nvme_iov_md": false 00:10:04.724 }, 00:10:04.724 "memory_domains": [ 00:10:04.724 { 00:10:04.724 "dma_device_id": "system", 00:10:04.724 "dma_device_type": 1 00:10:04.724 }, 00:10:04.724 { 00:10:04.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.724 "dma_device_type": 2 00:10:04.724 } 00:10:04.724 ], 00:10:04.724 "driver_specific": {} 00:10:04.724 } 00:10:04.724 ] 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.724 19:38:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.724 "name": "Existed_Raid", 00:10:04.724 "uuid": "38ce9a65-9c21-41dd-aa55-5141fafe04e8", 00:10:04.724 "strip_size_kb": 64, 00:10:04.724 "state": "online", 00:10:04.724 "raid_level": "raid0", 00:10:04.724 "superblock": false, 00:10:04.724 "num_base_bdevs": 4, 00:10:04.724 "num_base_bdevs_discovered": 4, 00:10:04.724 "num_base_bdevs_operational": 4, 00:10:04.724 "base_bdevs_list": [ 00:10:04.724 { 00:10:04.724 "name": "BaseBdev1", 00:10:04.724 "uuid": "1eb3a08e-0166-4e9f-a9c8-4da96eba8ef1", 00:10:04.724 "is_configured": true, 00:10:04.724 "data_offset": 0, 00:10:04.724 "data_size": 65536 00:10:04.724 }, 00:10:04.724 { 00:10:04.724 "name": "BaseBdev2", 00:10:04.724 "uuid": "96160e66-0b5b-4c8a-9006-fdf8a8dc5768", 00:10:04.724 "is_configured": true, 00:10:04.724 "data_offset": 0, 00:10:04.724 "data_size": 65536 00:10:04.724 }, 00:10:04.724 { 00:10:04.724 "name": "BaseBdev3", 00:10:04.724 "uuid": 
"2d45b382-8080-46bc-bd37-5408f8391796", 00:10:04.724 "is_configured": true, 00:10:04.724 "data_offset": 0, 00:10:04.724 "data_size": 65536 00:10:04.724 }, 00:10:04.724 { 00:10:04.724 "name": "BaseBdev4", 00:10:04.724 "uuid": "81075fb9-f3e2-4e95-8ffb-051811140d28", 00:10:04.724 "is_configured": true, 00:10:04.724 "data_offset": 0, 00:10:04.724 "data_size": 65536 00:10:04.724 } 00:10:04.724 ] 00:10:04.724 }' 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.724 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.294 [2024-12-12 19:38:47.847611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.294 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.294 19:38:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.294 "name": "Existed_Raid", 00:10:05.294 "aliases": [ 00:10:05.294 "38ce9a65-9c21-41dd-aa55-5141fafe04e8" 00:10:05.294 ], 00:10:05.294 "product_name": "Raid Volume", 00:10:05.294 "block_size": 512, 00:10:05.294 "num_blocks": 262144, 00:10:05.294 "uuid": "38ce9a65-9c21-41dd-aa55-5141fafe04e8", 00:10:05.294 "assigned_rate_limits": { 00:10:05.294 "rw_ios_per_sec": 0, 00:10:05.294 "rw_mbytes_per_sec": 0, 00:10:05.294 "r_mbytes_per_sec": 0, 00:10:05.294 "w_mbytes_per_sec": 0 00:10:05.294 }, 00:10:05.294 "claimed": false, 00:10:05.294 "zoned": false, 00:10:05.294 "supported_io_types": { 00:10:05.294 "read": true, 00:10:05.294 "write": true, 00:10:05.294 "unmap": true, 00:10:05.294 "flush": true, 00:10:05.294 "reset": true, 00:10:05.294 "nvme_admin": false, 00:10:05.294 "nvme_io": false, 00:10:05.294 "nvme_io_md": false, 00:10:05.294 "write_zeroes": true, 00:10:05.294 "zcopy": false, 00:10:05.294 "get_zone_info": false, 00:10:05.294 "zone_management": false, 00:10:05.294 "zone_append": false, 00:10:05.294 "compare": false, 00:10:05.294 "compare_and_write": false, 00:10:05.294 "abort": false, 00:10:05.294 "seek_hole": false, 00:10:05.294 "seek_data": false, 00:10:05.294 "copy": false, 00:10:05.294 "nvme_iov_md": false 00:10:05.294 }, 00:10:05.294 "memory_domains": [ 00:10:05.294 { 00:10:05.294 "dma_device_id": "system", 00:10:05.294 "dma_device_type": 1 00:10:05.294 }, 00:10:05.294 { 00:10:05.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.294 "dma_device_type": 2 00:10:05.294 }, 00:10:05.294 { 00:10:05.294 "dma_device_id": "system", 00:10:05.294 "dma_device_type": 1 00:10:05.294 }, 00:10:05.294 { 00:10:05.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.294 "dma_device_type": 2 00:10:05.294 }, 00:10:05.294 { 00:10:05.294 "dma_device_id": "system", 00:10:05.294 "dma_device_type": 1 00:10:05.295 }, 00:10:05.295 { 00:10:05.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:05.295 "dma_device_type": 2 00:10:05.295 }, 00:10:05.295 { 00:10:05.295 "dma_device_id": "system", 00:10:05.295 "dma_device_type": 1 00:10:05.295 }, 00:10:05.295 { 00:10:05.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.295 "dma_device_type": 2 00:10:05.295 } 00:10:05.295 ], 00:10:05.295 "driver_specific": { 00:10:05.295 "raid": { 00:10:05.295 "uuid": "38ce9a65-9c21-41dd-aa55-5141fafe04e8", 00:10:05.295 "strip_size_kb": 64, 00:10:05.295 "state": "online", 00:10:05.295 "raid_level": "raid0", 00:10:05.295 "superblock": false, 00:10:05.295 "num_base_bdevs": 4, 00:10:05.295 "num_base_bdevs_discovered": 4, 00:10:05.295 "num_base_bdevs_operational": 4, 00:10:05.295 "base_bdevs_list": [ 00:10:05.295 { 00:10:05.295 "name": "BaseBdev1", 00:10:05.295 "uuid": "1eb3a08e-0166-4e9f-a9c8-4da96eba8ef1", 00:10:05.295 "is_configured": true, 00:10:05.295 "data_offset": 0, 00:10:05.295 "data_size": 65536 00:10:05.295 }, 00:10:05.295 { 00:10:05.295 "name": "BaseBdev2", 00:10:05.295 "uuid": "96160e66-0b5b-4c8a-9006-fdf8a8dc5768", 00:10:05.295 "is_configured": true, 00:10:05.295 "data_offset": 0, 00:10:05.295 "data_size": 65536 00:10:05.295 }, 00:10:05.295 { 00:10:05.295 "name": "BaseBdev3", 00:10:05.295 "uuid": "2d45b382-8080-46bc-bd37-5408f8391796", 00:10:05.295 "is_configured": true, 00:10:05.295 "data_offset": 0, 00:10:05.295 "data_size": 65536 00:10:05.295 }, 00:10:05.295 { 00:10:05.295 "name": "BaseBdev4", 00:10:05.295 "uuid": "81075fb9-f3e2-4e95-8ffb-051811140d28", 00:10:05.295 "is_configured": true, 00:10:05.295 "data_offset": 0, 00:10:05.295 "data_size": 65536 00:10:05.295 } 00:10:05.295 ] 00:10:05.295 } 00:10:05.295 } 00:10:05.295 }' 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:05.295 BaseBdev2 00:10:05.295 BaseBdev3 
00:10:05.295 BaseBdev4' 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.295 19:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.295 19:38:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.295 19:38:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.295 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.295 [2024-12-12 19:38:48.134752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.295 [2024-12-12 19:38:48.134867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.295 [2024-12-12 19:38:48.134949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.555 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.555 "name": "Existed_Raid", 00:10:05.555 "uuid": "38ce9a65-9c21-41dd-aa55-5141fafe04e8", 00:10:05.555 "strip_size_kb": 64, 00:10:05.555 "state": "offline", 00:10:05.555 "raid_level": "raid0", 00:10:05.555 "superblock": false, 00:10:05.555 "num_base_bdevs": 4, 00:10:05.555 "num_base_bdevs_discovered": 3, 00:10:05.555 "num_base_bdevs_operational": 3, 00:10:05.555 "base_bdevs_list": [ 00:10:05.555 { 00:10:05.555 "name": null, 00:10:05.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.555 "is_configured": false, 00:10:05.555 "data_offset": 0, 00:10:05.555 "data_size": 65536 00:10:05.555 }, 00:10:05.555 { 00:10:05.556 "name": "BaseBdev2", 00:10:05.556 "uuid": "96160e66-0b5b-4c8a-9006-fdf8a8dc5768", 00:10:05.556 "is_configured": 
true, 00:10:05.556 "data_offset": 0, 00:10:05.556 "data_size": 65536 00:10:05.556 }, 00:10:05.556 { 00:10:05.556 "name": "BaseBdev3", 00:10:05.556 "uuid": "2d45b382-8080-46bc-bd37-5408f8391796", 00:10:05.556 "is_configured": true, 00:10:05.556 "data_offset": 0, 00:10:05.556 "data_size": 65536 00:10:05.556 }, 00:10:05.556 { 00:10:05.556 "name": "BaseBdev4", 00:10:05.556 "uuid": "81075fb9-f3e2-4e95-8ffb-051811140d28", 00:10:05.556 "is_configured": true, 00:10:05.556 "data_offset": 0, 00:10:05.556 "data_size": 65536 00:10:05.556 } 00:10:05.556 ] 00:10:05.556 }' 00:10:05.556 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.556 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.125 [2024-12-12 19:38:48.723694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.125 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.125 [2024-12-12 19:38:48.871179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.385 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.385 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.385 19:38:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.385 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.385 19:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.385 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.385 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.385 19:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.385 [2024-12-12 19:38:49.025660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:06.385 [2024-12-12 19:38:49.025732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.385 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.386 BaseBdev2 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.386 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.646 [ 00:10:06.646 { 00:10:06.646 "name": "BaseBdev2", 00:10:06.646 "aliases": [ 00:10:06.646 "3011f463-f0d4-48e4-a037-a16ae1005325" 00:10:06.646 ], 00:10:06.646 "product_name": "Malloc disk", 00:10:06.646 "block_size": 512, 00:10:06.646 "num_blocks": 65536, 00:10:06.646 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:06.646 "assigned_rate_limits": { 00:10:06.646 "rw_ios_per_sec": 0, 00:10:06.646 "rw_mbytes_per_sec": 0, 00:10:06.646 "r_mbytes_per_sec": 0, 00:10:06.646 "w_mbytes_per_sec": 0 00:10:06.646 }, 00:10:06.646 "claimed": false, 00:10:06.646 "zoned": false, 00:10:06.646 "supported_io_types": { 00:10:06.646 "read": true, 00:10:06.646 "write": true, 00:10:06.646 "unmap": true, 00:10:06.646 "flush": true, 00:10:06.646 "reset": true, 00:10:06.646 "nvme_admin": false, 00:10:06.646 "nvme_io": false, 00:10:06.646 "nvme_io_md": false, 00:10:06.646 "write_zeroes": true, 00:10:06.646 "zcopy": true, 00:10:06.646 "get_zone_info": false, 00:10:06.646 "zone_management": false, 00:10:06.646 "zone_append": false, 00:10:06.646 "compare": false, 00:10:06.646 "compare_and_write": false, 00:10:06.646 "abort": true, 00:10:06.646 "seek_hole": false, 00:10:06.646 
"seek_data": false, 00:10:06.646 "copy": true, 00:10:06.646 "nvme_iov_md": false 00:10:06.646 }, 00:10:06.646 "memory_domains": [ 00:10:06.646 { 00:10:06.646 "dma_device_id": "system", 00:10:06.646 "dma_device_type": 1 00:10:06.646 }, 00:10:06.646 { 00:10:06.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.646 "dma_device_type": 2 00:10:06.646 } 00:10:06.646 ], 00:10:06.646 "driver_specific": {} 00:10:06.646 } 00:10:06.646 ] 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.646 BaseBdev3 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.646 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.646 [ 00:10:06.646 { 00:10:06.646 "name": "BaseBdev3", 00:10:06.646 "aliases": [ 00:10:06.646 "fd36f860-7726-4c36-85ea-5ab1531bc584" 00:10:06.646 ], 00:10:06.646 "product_name": "Malloc disk", 00:10:06.646 "block_size": 512, 00:10:06.646 "num_blocks": 65536, 00:10:06.646 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 00:10:06.646 "assigned_rate_limits": { 00:10:06.646 "rw_ios_per_sec": 0, 00:10:06.646 "rw_mbytes_per_sec": 0, 00:10:06.646 "r_mbytes_per_sec": 0, 00:10:06.646 "w_mbytes_per_sec": 0 00:10:06.646 }, 00:10:06.646 "claimed": false, 00:10:06.646 "zoned": false, 00:10:06.646 "supported_io_types": { 00:10:06.646 "read": true, 00:10:06.646 "write": true, 00:10:06.646 "unmap": true, 00:10:06.646 "flush": true, 00:10:06.646 "reset": true, 00:10:06.646 "nvme_admin": false, 00:10:06.646 "nvme_io": false, 00:10:06.646 "nvme_io_md": false, 00:10:06.646 "write_zeroes": true, 00:10:06.646 "zcopy": true, 00:10:06.646 "get_zone_info": false, 00:10:06.646 "zone_management": false, 00:10:06.646 "zone_append": false, 00:10:06.646 "compare": false, 00:10:06.647 "compare_and_write": false, 00:10:06.647 "abort": true, 00:10:06.647 "seek_hole": false, 00:10:06.647 "seek_data": false, 
00:10:06.647 "copy": true, 00:10:06.647 "nvme_iov_md": false 00:10:06.647 }, 00:10:06.647 "memory_domains": [ 00:10:06.647 { 00:10:06.647 "dma_device_id": "system", 00:10:06.647 "dma_device_type": 1 00:10:06.647 }, 00:10:06.647 { 00:10:06.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.647 "dma_device_type": 2 00:10:06.647 } 00:10:06.647 ], 00:10:06.647 "driver_specific": {} 00:10:06.647 } 00:10:06.647 ] 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.647 BaseBdev4 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.647 
19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.647 [ 00:10:06.647 { 00:10:06.647 "name": "BaseBdev4", 00:10:06.647 "aliases": [ 00:10:06.647 "678083d4-aea5-4e8e-9090-f944e9fc5de7" 00:10:06.647 ], 00:10:06.647 "product_name": "Malloc disk", 00:10:06.647 "block_size": 512, 00:10:06.647 "num_blocks": 65536, 00:10:06.647 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:06.647 "assigned_rate_limits": { 00:10:06.647 "rw_ios_per_sec": 0, 00:10:06.647 "rw_mbytes_per_sec": 0, 00:10:06.647 "r_mbytes_per_sec": 0, 00:10:06.647 "w_mbytes_per_sec": 0 00:10:06.647 }, 00:10:06.647 "claimed": false, 00:10:06.647 "zoned": false, 00:10:06.647 "supported_io_types": { 00:10:06.647 "read": true, 00:10:06.647 "write": true, 00:10:06.647 "unmap": true, 00:10:06.647 "flush": true, 00:10:06.647 "reset": true, 00:10:06.647 "nvme_admin": false, 00:10:06.647 "nvme_io": false, 00:10:06.647 "nvme_io_md": false, 00:10:06.647 "write_zeroes": true, 00:10:06.647 "zcopy": true, 00:10:06.647 "get_zone_info": false, 00:10:06.647 "zone_management": false, 00:10:06.647 "zone_append": false, 00:10:06.647 "compare": false, 00:10:06.647 "compare_and_write": false, 00:10:06.647 "abort": true, 00:10:06.647 "seek_hole": false, 00:10:06.647 "seek_data": false, 00:10:06.647 
"copy": true, 00:10:06.647 "nvme_iov_md": false 00:10:06.647 }, 00:10:06.647 "memory_domains": [ 00:10:06.647 { 00:10:06.647 "dma_device_id": "system", 00:10:06.647 "dma_device_type": 1 00:10:06.647 }, 00:10:06.647 { 00:10:06.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.647 "dma_device_type": 2 00:10:06.647 } 00:10:06.647 ], 00:10:06.647 "driver_specific": {} 00:10:06.647 } 00:10:06.647 ] 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.647 [2024-12-12 19:38:49.441583] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.647 [2024-12-12 19:38:49.441730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.647 [2024-12-12 19:38:49.441775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.647 [2024-12-12 19:38:49.443933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.647 [2024-12-12 19:38:49.444038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.647 19:38:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.647 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.907 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.907 "name": "Existed_Raid", 00:10:06.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.907 "strip_size_kb": 64, 00:10:06.907 "state": "configuring", 00:10:06.907 
"raid_level": "raid0", 00:10:06.907 "superblock": false, 00:10:06.907 "num_base_bdevs": 4, 00:10:06.907 "num_base_bdevs_discovered": 3, 00:10:06.907 "num_base_bdevs_operational": 4, 00:10:06.907 "base_bdevs_list": [ 00:10:06.907 { 00:10:06.907 "name": "BaseBdev1", 00:10:06.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.907 "is_configured": false, 00:10:06.907 "data_offset": 0, 00:10:06.907 "data_size": 0 00:10:06.907 }, 00:10:06.907 { 00:10:06.907 "name": "BaseBdev2", 00:10:06.907 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:06.907 "is_configured": true, 00:10:06.907 "data_offset": 0, 00:10:06.907 "data_size": 65536 00:10:06.907 }, 00:10:06.907 { 00:10:06.907 "name": "BaseBdev3", 00:10:06.907 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 00:10:06.907 "is_configured": true, 00:10:06.907 "data_offset": 0, 00:10:06.907 "data_size": 65536 00:10:06.907 }, 00:10:06.907 { 00:10:06.907 "name": "BaseBdev4", 00:10:06.907 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:06.907 "is_configured": true, 00:10:06.907 "data_offset": 0, 00:10:06.907 "data_size": 65536 00:10:06.907 } 00:10:06.907 ] 00:10:06.907 }' 00:10:06.907 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.907 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.167 [2024-12-12 19:38:49.896852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.167 "name": "Existed_Raid", 00:10:07.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.167 "strip_size_kb": 64, 00:10:07.167 "state": "configuring", 00:10:07.167 "raid_level": "raid0", 00:10:07.167 "superblock": false, 00:10:07.167 
"num_base_bdevs": 4, 00:10:07.167 "num_base_bdevs_discovered": 2, 00:10:07.167 "num_base_bdevs_operational": 4, 00:10:07.167 "base_bdevs_list": [ 00:10:07.167 { 00:10:07.167 "name": "BaseBdev1", 00:10:07.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.167 "is_configured": false, 00:10:07.167 "data_offset": 0, 00:10:07.167 "data_size": 0 00:10:07.167 }, 00:10:07.167 { 00:10:07.167 "name": null, 00:10:07.167 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:07.167 "is_configured": false, 00:10:07.167 "data_offset": 0, 00:10:07.167 "data_size": 65536 00:10:07.167 }, 00:10:07.167 { 00:10:07.167 "name": "BaseBdev3", 00:10:07.167 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 00:10:07.167 "is_configured": true, 00:10:07.167 "data_offset": 0, 00:10:07.167 "data_size": 65536 00:10:07.167 }, 00:10:07.167 { 00:10:07.167 "name": "BaseBdev4", 00:10:07.167 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:07.167 "is_configured": true, 00:10:07.167 "data_offset": 0, 00:10:07.167 "data_size": 65536 00:10:07.167 } 00:10:07.167 ] 00:10:07.167 }' 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.167 19:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.427 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.427 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.427 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.427 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.687 19:38:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.687 [2024-12-12 19:38:50.345774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.687 BaseBdev1 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.687 [ 00:10:07.687 { 00:10:07.687 "name": "BaseBdev1", 00:10:07.687 "aliases": [ 00:10:07.687 "bc71eb61-5261-4774-99c7-1bede08ad2db" 00:10:07.687 ], 00:10:07.687 "product_name": "Malloc disk", 00:10:07.687 "block_size": 512, 00:10:07.687 "num_blocks": 65536, 00:10:07.687 "uuid": "bc71eb61-5261-4774-99c7-1bede08ad2db", 00:10:07.687 "assigned_rate_limits": { 00:10:07.687 "rw_ios_per_sec": 0, 00:10:07.687 "rw_mbytes_per_sec": 0, 00:10:07.687 "r_mbytes_per_sec": 0, 00:10:07.687 "w_mbytes_per_sec": 0 00:10:07.687 }, 00:10:07.687 "claimed": true, 00:10:07.687 "claim_type": "exclusive_write", 00:10:07.687 "zoned": false, 00:10:07.687 "supported_io_types": { 00:10:07.687 "read": true, 00:10:07.687 "write": true, 00:10:07.687 "unmap": true, 00:10:07.687 "flush": true, 00:10:07.687 "reset": true, 00:10:07.687 "nvme_admin": false, 00:10:07.687 "nvme_io": false, 00:10:07.687 "nvme_io_md": false, 00:10:07.687 "write_zeroes": true, 00:10:07.687 "zcopy": true, 00:10:07.687 "get_zone_info": false, 00:10:07.687 "zone_management": false, 00:10:07.687 "zone_append": false, 00:10:07.687 "compare": false, 00:10:07.687 "compare_and_write": false, 00:10:07.687 "abort": true, 00:10:07.687 "seek_hole": false, 00:10:07.687 "seek_data": false, 00:10:07.687 "copy": true, 00:10:07.687 "nvme_iov_md": false 00:10:07.687 }, 00:10:07.687 "memory_domains": [ 00:10:07.687 { 00:10:07.687 "dma_device_id": "system", 00:10:07.687 "dma_device_type": 1 00:10:07.687 }, 00:10:07.687 { 00:10:07.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.687 "dma_device_type": 2 00:10:07.687 } 00:10:07.687 ], 00:10:07.687 "driver_specific": {} 00:10:07.687 } 00:10:07.687 ] 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.687 "name": "Existed_Raid", 00:10:07.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.687 "strip_size_kb": 64, 00:10:07.687 "state": "configuring", 00:10:07.687 "raid_level": "raid0", 00:10:07.687 "superblock": false, 
00:10:07.687 "num_base_bdevs": 4, 00:10:07.687 "num_base_bdevs_discovered": 3, 00:10:07.687 "num_base_bdevs_operational": 4, 00:10:07.687 "base_bdevs_list": [ 00:10:07.687 { 00:10:07.687 "name": "BaseBdev1", 00:10:07.687 "uuid": "bc71eb61-5261-4774-99c7-1bede08ad2db", 00:10:07.687 "is_configured": true, 00:10:07.687 "data_offset": 0, 00:10:07.687 "data_size": 65536 00:10:07.687 }, 00:10:07.687 { 00:10:07.687 "name": null, 00:10:07.687 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:07.687 "is_configured": false, 00:10:07.687 "data_offset": 0, 00:10:07.687 "data_size": 65536 00:10:07.687 }, 00:10:07.687 { 00:10:07.687 "name": "BaseBdev3", 00:10:07.687 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 00:10:07.687 "is_configured": true, 00:10:07.687 "data_offset": 0, 00:10:07.687 "data_size": 65536 00:10:07.687 }, 00:10:07.687 { 00:10:07.687 "name": "BaseBdev4", 00:10:07.687 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:07.687 "is_configured": true, 00:10:07.687 "data_offset": 0, 00:10:07.687 "data_size": 65536 00:10:07.687 } 00:10:07.687 ] 00:10:07.687 }' 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.687 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.947 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.947 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.947 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.947 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.947 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:08.207 19:38:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.207 [2024-12-12 19:38:50.821145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.207 19:38:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.207 "name": "Existed_Raid", 00:10:08.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.207 "strip_size_kb": 64, 00:10:08.207 "state": "configuring", 00:10:08.207 "raid_level": "raid0", 00:10:08.207 "superblock": false, 00:10:08.207 "num_base_bdevs": 4, 00:10:08.207 "num_base_bdevs_discovered": 2, 00:10:08.207 "num_base_bdevs_operational": 4, 00:10:08.207 "base_bdevs_list": [ 00:10:08.207 { 00:10:08.207 "name": "BaseBdev1", 00:10:08.207 "uuid": "bc71eb61-5261-4774-99c7-1bede08ad2db", 00:10:08.207 "is_configured": true, 00:10:08.207 "data_offset": 0, 00:10:08.207 "data_size": 65536 00:10:08.207 }, 00:10:08.207 { 00:10:08.207 "name": null, 00:10:08.207 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:08.207 "is_configured": false, 00:10:08.207 "data_offset": 0, 00:10:08.207 "data_size": 65536 00:10:08.207 }, 00:10:08.207 { 00:10:08.207 "name": null, 00:10:08.207 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 00:10:08.207 "is_configured": false, 00:10:08.207 "data_offset": 0, 00:10:08.207 "data_size": 65536 00:10:08.207 }, 00:10:08.207 { 00:10:08.207 "name": "BaseBdev4", 00:10:08.207 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:08.207 "is_configured": true, 00:10:08.207 "data_offset": 0, 00:10:08.207 "data_size": 65536 00:10:08.207 } 00:10:08.207 ] 00:10:08.207 }' 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.207 19:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.467 [2024-12-12 19:38:51.260439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.467 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.727 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.727 "name": "Existed_Raid", 00:10:08.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.727 "strip_size_kb": 64, 00:10:08.727 "state": "configuring", 00:10:08.727 "raid_level": "raid0", 00:10:08.727 "superblock": false, 00:10:08.727 "num_base_bdevs": 4, 00:10:08.727 "num_base_bdevs_discovered": 3, 00:10:08.727 "num_base_bdevs_operational": 4, 00:10:08.727 "base_bdevs_list": [ 00:10:08.727 { 00:10:08.727 "name": "BaseBdev1", 00:10:08.727 "uuid": "bc71eb61-5261-4774-99c7-1bede08ad2db", 00:10:08.727 "is_configured": true, 00:10:08.727 "data_offset": 0, 00:10:08.727 "data_size": 65536 00:10:08.727 }, 00:10:08.727 { 00:10:08.727 "name": null, 00:10:08.727 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:08.727 "is_configured": false, 00:10:08.727 "data_offset": 0, 00:10:08.727 "data_size": 65536 00:10:08.727 }, 00:10:08.727 { 00:10:08.727 "name": "BaseBdev3", 00:10:08.727 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 
00:10:08.727 "is_configured": true, 00:10:08.727 "data_offset": 0, 00:10:08.727 "data_size": 65536 00:10:08.727 }, 00:10:08.727 { 00:10:08.727 "name": "BaseBdev4", 00:10:08.727 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:08.727 "is_configured": true, 00:10:08.727 "data_offset": 0, 00:10:08.727 "data_size": 65536 00:10:08.727 } 00:10:08.727 ] 00:10:08.727 }' 00:10:08.727 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.727 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.986 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.986 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.986 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.986 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.987 [2024-12-12 19:38:51.695753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.987 19:38:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.987 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.247 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.247 "name": "Existed_Raid", 00:10:09.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.247 "strip_size_kb": 64, 00:10:09.247 "state": "configuring", 00:10:09.247 "raid_level": "raid0", 00:10:09.247 "superblock": false, 00:10:09.247 "num_base_bdevs": 4, 00:10:09.247 "num_base_bdevs_discovered": 2, 00:10:09.247 
"num_base_bdevs_operational": 4, 00:10:09.247 "base_bdevs_list": [ 00:10:09.247 { 00:10:09.247 "name": null, 00:10:09.247 "uuid": "bc71eb61-5261-4774-99c7-1bede08ad2db", 00:10:09.247 "is_configured": false, 00:10:09.247 "data_offset": 0, 00:10:09.247 "data_size": 65536 00:10:09.247 }, 00:10:09.247 { 00:10:09.247 "name": null, 00:10:09.247 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:09.247 "is_configured": false, 00:10:09.247 "data_offset": 0, 00:10:09.247 "data_size": 65536 00:10:09.247 }, 00:10:09.247 { 00:10:09.247 "name": "BaseBdev3", 00:10:09.247 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 00:10:09.247 "is_configured": true, 00:10:09.247 "data_offset": 0, 00:10:09.247 "data_size": 65536 00:10:09.247 }, 00:10:09.247 { 00:10:09.247 "name": "BaseBdev4", 00:10:09.247 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:09.247 "is_configured": true, 00:10:09.247 "data_offset": 0, 00:10:09.247 "data_size": 65536 00:10:09.247 } 00:10:09.247 ] 00:10:09.247 }' 00:10:09.247 19:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.247 19:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.506 [2024-12-12 19:38:52.272265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.506 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.506 
19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.507 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.507 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.507 "name": "Existed_Raid", 00:10:09.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.507 "strip_size_kb": 64, 00:10:09.507 "state": "configuring", 00:10:09.507 "raid_level": "raid0", 00:10:09.507 "superblock": false, 00:10:09.507 "num_base_bdevs": 4, 00:10:09.507 "num_base_bdevs_discovered": 3, 00:10:09.507 "num_base_bdevs_operational": 4, 00:10:09.507 "base_bdevs_list": [ 00:10:09.507 { 00:10:09.507 "name": null, 00:10:09.507 "uuid": "bc71eb61-5261-4774-99c7-1bede08ad2db", 00:10:09.507 "is_configured": false, 00:10:09.507 "data_offset": 0, 00:10:09.507 "data_size": 65536 00:10:09.507 }, 00:10:09.507 { 00:10:09.507 "name": "BaseBdev2", 00:10:09.507 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:09.507 "is_configured": true, 00:10:09.507 "data_offset": 0, 00:10:09.507 "data_size": 65536 00:10:09.507 }, 00:10:09.507 { 00:10:09.507 "name": "BaseBdev3", 00:10:09.507 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 00:10:09.507 "is_configured": true, 00:10:09.507 "data_offset": 0, 00:10:09.507 "data_size": 65536 00:10:09.507 }, 00:10:09.507 { 00:10:09.507 "name": "BaseBdev4", 00:10:09.507 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:09.507 "is_configured": true, 00:10:09.507 "data_offset": 0, 00:10:09.507 "data_size": 65536 00:10:09.507 } 00:10:09.507 ] 00:10:09.507 }' 00:10:09.507 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.507 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.076 19:38:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bc71eb61-5261-4774-99c7-1bede08ad2db 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.076 [2024-12-12 19:38:52.844518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:10.076 [2024-12-12 19:38:52.844593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:10.076 [2024-12-12 19:38:52.844601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:10.076 [2024-12-12 19:38:52.844887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:10.076 [2024-12-12 19:38:52.845093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:10.076 [2024-12-12 19:38:52.845107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:10.076 [2024-12-12 19:38:52.845359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.076 NewBaseBdev 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.076 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:10.076 [ 00:10:10.076 { 00:10:10.076 "name": "NewBaseBdev", 00:10:10.076 "aliases": [ 00:10:10.076 "bc71eb61-5261-4774-99c7-1bede08ad2db" 00:10:10.076 ], 00:10:10.076 "product_name": "Malloc disk", 00:10:10.076 "block_size": 512, 00:10:10.076 "num_blocks": 65536, 00:10:10.076 "uuid": "bc71eb61-5261-4774-99c7-1bede08ad2db", 00:10:10.076 "assigned_rate_limits": { 00:10:10.076 "rw_ios_per_sec": 0, 00:10:10.076 "rw_mbytes_per_sec": 0, 00:10:10.076 "r_mbytes_per_sec": 0, 00:10:10.076 "w_mbytes_per_sec": 0 00:10:10.076 }, 00:10:10.076 "claimed": true, 00:10:10.076 "claim_type": "exclusive_write", 00:10:10.076 "zoned": false, 00:10:10.076 "supported_io_types": { 00:10:10.076 "read": true, 00:10:10.076 "write": true, 00:10:10.076 "unmap": true, 00:10:10.076 "flush": true, 00:10:10.076 "reset": true, 00:10:10.076 "nvme_admin": false, 00:10:10.076 "nvme_io": false, 00:10:10.076 "nvme_io_md": false, 00:10:10.076 "write_zeroes": true, 00:10:10.076 "zcopy": true, 00:10:10.077 "get_zone_info": false, 00:10:10.077 "zone_management": false, 00:10:10.077 "zone_append": false, 00:10:10.077 "compare": false, 00:10:10.077 "compare_and_write": false, 00:10:10.077 "abort": true, 00:10:10.077 "seek_hole": false, 00:10:10.077 "seek_data": false, 00:10:10.077 "copy": true, 00:10:10.077 "nvme_iov_md": false 00:10:10.077 }, 00:10:10.077 "memory_domains": [ 00:10:10.077 { 00:10:10.077 "dma_device_id": "system", 00:10:10.077 "dma_device_type": 1 00:10:10.077 }, 00:10:10.077 { 00:10:10.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.077 "dma_device_type": 2 00:10:10.077 } 00:10:10.077 ], 00:10:10.077 "driver_specific": {} 00:10:10.077 } 00:10:10.077 ] 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.077 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.337 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.337 "name": "Existed_Raid", 00:10:10.337 "uuid": "e302c215-37ec-4142-ba29-ecbc70fed09b", 00:10:10.337 "strip_size_kb": 64, 00:10:10.337 "state": "online", 00:10:10.337 "raid_level": "raid0", 00:10:10.337 "superblock": false, 00:10:10.337 "num_base_bdevs": 4, 00:10:10.337 
"num_base_bdevs_discovered": 4, 00:10:10.337 "num_base_bdevs_operational": 4, 00:10:10.337 "base_bdevs_list": [ 00:10:10.337 { 00:10:10.337 "name": "NewBaseBdev", 00:10:10.337 "uuid": "bc71eb61-5261-4774-99c7-1bede08ad2db", 00:10:10.337 "is_configured": true, 00:10:10.337 "data_offset": 0, 00:10:10.337 "data_size": 65536 00:10:10.337 }, 00:10:10.337 { 00:10:10.337 "name": "BaseBdev2", 00:10:10.337 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:10.337 "is_configured": true, 00:10:10.337 "data_offset": 0, 00:10:10.337 "data_size": 65536 00:10:10.337 }, 00:10:10.337 { 00:10:10.337 "name": "BaseBdev3", 00:10:10.337 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 00:10:10.337 "is_configured": true, 00:10:10.337 "data_offset": 0, 00:10:10.337 "data_size": 65536 00:10:10.337 }, 00:10:10.337 { 00:10:10.337 "name": "BaseBdev4", 00:10:10.337 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:10.337 "is_configured": true, 00:10:10.337 "data_offset": 0, 00:10:10.337 "data_size": 65536 00:10:10.337 } 00:10:10.337 ] 00:10:10.337 }' 00:10:10.337 19:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.337 19:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.597 [2024-12-12 19:38:53.336144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.597 "name": "Existed_Raid", 00:10:10.597 "aliases": [ 00:10:10.597 "e302c215-37ec-4142-ba29-ecbc70fed09b" 00:10:10.597 ], 00:10:10.597 "product_name": "Raid Volume", 00:10:10.597 "block_size": 512, 00:10:10.597 "num_blocks": 262144, 00:10:10.597 "uuid": "e302c215-37ec-4142-ba29-ecbc70fed09b", 00:10:10.597 "assigned_rate_limits": { 00:10:10.597 "rw_ios_per_sec": 0, 00:10:10.597 "rw_mbytes_per_sec": 0, 00:10:10.597 "r_mbytes_per_sec": 0, 00:10:10.597 "w_mbytes_per_sec": 0 00:10:10.597 }, 00:10:10.597 "claimed": false, 00:10:10.597 "zoned": false, 00:10:10.597 "supported_io_types": { 00:10:10.597 "read": true, 00:10:10.597 "write": true, 00:10:10.597 "unmap": true, 00:10:10.597 "flush": true, 00:10:10.597 "reset": true, 00:10:10.597 "nvme_admin": false, 00:10:10.597 "nvme_io": false, 00:10:10.597 "nvme_io_md": false, 00:10:10.597 "write_zeroes": true, 00:10:10.597 "zcopy": false, 00:10:10.597 "get_zone_info": false, 00:10:10.597 "zone_management": false, 00:10:10.597 "zone_append": false, 00:10:10.597 "compare": false, 00:10:10.597 "compare_and_write": false, 00:10:10.597 "abort": false, 00:10:10.597 "seek_hole": false, 00:10:10.597 "seek_data": false, 00:10:10.597 "copy": false, 00:10:10.597 "nvme_iov_md": false 00:10:10.597 }, 00:10:10.597 "memory_domains": [ 
00:10:10.597 { 00:10:10.597 "dma_device_id": "system", 00:10:10.597 "dma_device_type": 1 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.597 "dma_device_type": 2 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "dma_device_id": "system", 00:10:10.597 "dma_device_type": 1 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.597 "dma_device_type": 2 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "dma_device_id": "system", 00:10:10.597 "dma_device_type": 1 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.597 "dma_device_type": 2 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "dma_device_id": "system", 00:10:10.597 "dma_device_type": 1 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.597 "dma_device_type": 2 00:10:10.597 } 00:10:10.597 ], 00:10:10.597 "driver_specific": { 00:10:10.597 "raid": { 00:10:10.597 "uuid": "e302c215-37ec-4142-ba29-ecbc70fed09b", 00:10:10.597 "strip_size_kb": 64, 00:10:10.597 "state": "online", 00:10:10.597 "raid_level": "raid0", 00:10:10.597 "superblock": false, 00:10:10.597 "num_base_bdevs": 4, 00:10:10.597 "num_base_bdevs_discovered": 4, 00:10:10.597 "num_base_bdevs_operational": 4, 00:10:10.597 "base_bdevs_list": [ 00:10:10.597 { 00:10:10.597 "name": "NewBaseBdev", 00:10:10.597 "uuid": "bc71eb61-5261-4774-99c7-1bede08ad2db", 00:10:10.597 "is_configured": true, 00:10:10.597 "data_offset": 0, 00:10:10.597 "data_size": 65536 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "name": "BaseBdev2", 00:10:10.597 "uuid": "3011f463-f0d4-48e4-a037-a16ae1005325", 00:10:10.597 "is_configured": true, 00:10:10.597 "data_offset": 0, 00:10:10.597 "data_size": 65536 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "name": "BaseBdev3", 00:10:10.597 "uuid": "fd36f860-7726-4c36-85ea-5ab1531bc584", 00:10:10.597 "is_configured": true, 00:10:10.597 "data_offset": 0, 00:10:10.597 "data_size": 65536 
00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "name": "BaseBdev4", 00:10:10.597 "uuid": "678083d4-aea5-4e8e-9090-f944e9fc5de7", 00:10:10.597 "is_configured": true, 00:10:10.597 "data_offset": 0, 00:10:10.597 "data_size": 65536 00:10:10.597 } 00:10:10.597 ] 00:10:10.597 } 00:10:10.597 } 00:10:10.597 }' 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.597 BaseBdev2 00:10:10.597 BaseBdev3 00:10:10.597 BaseBdev4' 00:10:10.597 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.858 
19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.858 [2024-12-12 19:38:53.603242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.858 [2024-12-12 19:38:53.603362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.858 [2024-12-12 19:38:53.603512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.858 [2024-12-12 19:38:53.603640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.858 [2024-12-12 19:38:53.603687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71075 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 71075 ']' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71075 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71075 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71075' 00:10:10.858 killing process with pid 71075 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71075 00:10:10.858 [2024-12-12 19:38:53.650199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.858 19:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71075 00:10:11.427 [2024-12-12 19:38:54.075393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.811 ************************************ 00:10:12.811 END TEST raid_state_function_test 00:10:12.811 ************************************ 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.811 00:10:12.811 real 0m11.330s 00:10:12.811 user 0m17.665s 00:10:12.811 sys 0m2.098s 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.811 19:38:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:12.811 19:38:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:12.811 19:38:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.811 19:38:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.811 ************************************ 00:10:12.811 START TEST raid_state_function_test_sb 00:10:12.811 ************************************ 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:12.811 
19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71741 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71741' 00:10:12.811 Process raid pid: 71741 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71741 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71741 ']' 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.811 19:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.812 19:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.812 19:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.812 19:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.812 [2024-12-12 19:38:55.439082] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:12.812 [2024-12-12 19:38:55.439207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.812 [2024-12-12 19:38:55.612716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.071 [2024-12-12 19:38:55.743896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.331 [2024-12-12 19:38:55.985872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.331 [2024-12-12 19:38:55.985913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.591 [2024-12-12 19:38:56.273370] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.591 [2024-12-12 19:38:56.273440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.591 [2024-12-12 19:38:56.273451] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.591 [2024-12-12 19:38:56.273461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.591 [2024-12-12 19:38:56.273467] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:13.591 [2024-12-12 19:38:56.273478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.591 [2024-12-12 19:38:56.273484] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:13.591 [2024-12-12 19:38:56.273494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.591 19:38:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.591 "name": "Existed_Raid", 00:10:13.591 "uuid": "c7cd2715-b8bb-4e56-8430-091e09725488", 00:10:13.591 "strip_size_kb": 64, 00:10:13.591 "state": "configuring", 00:10:13.591 "raid_level": "raid0", 00:10:13.591 "superblock": true, 00:10:13.591 "num_base_bdevs": 4, 00:10:13.591 "num_base_bdevs_discovered": 0, 00:10:13.591 "num_base_bdevs_operational": 4, 00:10:13.591 "base_bdevs_list": [ 00:10:13.591 { 00:10:13.591 "name": "BaseBdev1", 00:10:13.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.591 "is_configured": false, 00:10:13.591 "data_offset": 0, 00:10:13.591 "data_size": 0 00:10:13.591 }, 00:10:13.591 { 00:10:13.591 "name": "BaseBdev2", 00:10:13.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.591 "is_configured": false, 00:10:13.591 "data_offset": 0, 00:10:13.591 "data_size": 0 00:10:13.591 }, 00:10:13.591 { 00:10:13.591 "name": "BaseBdev3", 00:10:13.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.591 "is_configured": false, 00:10:13.591 "data_offset": 0, 00:10:13.591 "data_size": 0 00:10:13.591 }, 00:10:13.591 { 00:10:13.591 "name": "BaseBdev4", 00:10:13.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.591 "is_configured": false, 00:10:13.591 "data_offset": 0, 00:10:13.591 "data_size": 0 00:10:13.591 } 00:10:13.591 ] 00:10:13.591 }' 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.591 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.161 [2024-12-12 19:38:56.708557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.161 [2024-12-12 19:38:56.708611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.161 [2024-12-12 19:38:56.716543] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.161 [2024-12-12 19:38:56.716594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.161 [2024-12-12 19:38:56.716604] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.161 [2024-12-12 19:38:56.716613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.161 [2024-12-12 19:38:56.716619] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.161 [2024-12-12 19:38:56.716629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.161 [2024-12-12 19:38:56.716635] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:14.161 [2024-12-12 19:38:56.716643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.161 [2024-12-12 19:38:56.771394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.161 BaseBdev1 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.161 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.161 [ 00:10:14.161 { 00:10:14.161 "name": "BaseBdev1", 00:10:14.161 "aliases": [ 00:10:14.161 "0c391022-74eb-42bb-a4d6-77ded32d300a" 00:10:14.161 ], 00:10:14.161 "product_name": "Malloc disk", 00:10:14.161 "block_size": 512, 00:10:14.161 "num_blocks": 65536, 00:10:14.161 "uuid": "0c391022-74eb-42bb-a4d6-77ded32d300a", 00:10:14.161 "assigned_rate_limits": { 00:10:14.161 "rw_ios_per_sec": 0, 00:10:14.161 "rw_mbytes_per_sec": 0, 00:10:14.161 "r_mbytes_per_sec": 0, 00:10:14.161 "w_mbytes_per_sec": 0 00:10:14.161 }, 00:10:14.161 "claimed": true, 00:10:14.161 "claim_type": "exclusive_write", 00:10:14.161 "zoned": false, 00:10:14.161 "supported_io_types": { 00:10:14.161 "read": true, 00:10:14.161 "write": true, 00:10:14.161 "unmap": true, 00:10:14.161 "flush": true, 00:10:14.161 "reset": true, 00:10:14.161 "nvme_admin": false, 00:10:14.161 "nvme_io": false, 00:10:14.161 "nvme_io_md": false, 00:10:14.161 "write_zeroes": true, 00:10:14.161 "zcopy": true, 00:10:14.161 "get_zone_info": false, 00:10:14.161 "zone_management": false, 00:10:14.161 "zone_append": false, 00:10:14.161 "compare": false, 00:10:14.161 "compare_and_write": false, 00:10:14.161 "abort": true, 00:10:14.161 "seek_hole": false, 00:10:14.161 "seek_data": false, 00:10:14.161 "copy": true, 00:10:14.161 "nvme_iov_md": false 00:10:14.161 }, 00:10:14.161 "memory_domains": [ 00:10:14.162 { 00:10:14.162 "dma_device_id": "system", 00:10:14.162 "dma_device_type": 1 00:10:14.162 }, 00:10:14.162 { 00:10:14.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.162 "dma_device_type": 2 00:10:14.162 } 00:10:14.162 ], 00:10:14.162 "driver_specific": {} 
00:10:14.162 } 00:10:14.162 ] 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.162 "name": "Existed_Raid", 00:10:14.162 "uuid": "af5f1179-9f2e-4623-b1c4-1fae278338a4", 00:10:14.162 "strip_size_kb": 64, 00:10:14.162 "state": "configuring", 00:10:14.162 "raid_level": "raid0", 00:10:14.162 "superblock": true, 00:10:14.162 "num_base_bdevs": 4, 00:10:14.162 "num_base_bdevs_discovered": 1, 00:10:14.162 "num_base_bdevs_operational": 4, 00:10:14.162 "base_bdevs_list": [ 00:10:14.162 { 00:10:14.162 "name": "BaseBdev1", 00:10:14.162 "uuid": "0c391022-74eb-42bb-a4d6-77ded32d300a", 00:10:14.162 "is_configured": true, 00:10:14.162 "data_offset": 2048, 00:10:14.162 "data_size": 63488 00:10:14.162 }, 00:10:14.162 { 00:10:14.162 "name": "BaseBdev2", 00:10:14.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.162 "is_configured": false, 00:10:14.162 "data_offset": 0, 00:10:14.162 "data_size": 0 00:10:14.162 }, 00:10:14.162 { 00:10:14.162 "name": "BaseBdev3", 00:10:14.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.162 "is_configured": false, 00:10:14.162 "data_offset": 0, 00:10:14.162 "data_size": 0 00:10:14.162 }, 00:10:14.162 { 00:10:14.162 "name": "BaseBdev4", 00:10:14.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.162 "is_configured": false, 00:10:14.162 "data_offset": 0, 00:10:14.162 "data_size": 0 00:10:14.162 } 00:10:14.162 ] 00:10:14.162 }' 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.162 19:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.731 [2024-12-12 19:38:57.282587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.731 [2024-12-12 19:38:57.282658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.731 [2024-12-12 19:38:57.294625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.731 [2024-12-12 19:38:57.296762] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.731 [2024-12-12 19:38:57.296804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.731 [2024-12-12 19:38:57.296814] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.731 [2024-12-12 19:38:57.296824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.731 [2024-12-12 19:38:57.296830] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:14.731 [2024-12-12 19:38:57.296838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:14.731 19:38:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.731 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.731 "name": 
"Existed_Raid", 00:10:14.731 "uuid": "926ab7c5-a15c-4f13-9b48-eb9ab7afc0e4", 00:10:14.731 "strip_size_kb": 64, 00:10:14.731 "state": "configuring", 00:10:14.731 "raid_level": "raid0", 00:10:14.731 "superblock": true, 00:10:14.731 "num_base_bdevs": 4, 00:10:14.731 "num_base_bdevs_discovered": 1, 00:10:14.731 "num_base_bdevs_operational": 4, 00:10:14.731 "base_bdevs_list": [ 00:10:14.731 { 00:10:14.731 "name": "BaseBdev1", 00:10:14.731 "uuid": "0c391022-74eb-42bb-a4d6-77ded32d300a", 00:10:14.731 "is_configured": true, 00:10:14.731 "data_offset": 2048, 00:10:14.731 "data_size": 63488 00:10:14.731 }, 00:10:14.731 { 00:10:14.731 "name": "BaseBdev2", 00:10:14.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.731 "is_configured": false, 00:10:14.731 "data_offset": 0, 00:10:14.731 "data_size": 0 00:10:14.731 }, 00:10:14.732 { 00:10:14.732 "name": "BaseBdev3", 00:10:14.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.732 "is_configured": false, 00:10:14.732 "data_offset": 0, 00:10:14.732 "data_size": 0 00:10:14.732 }, 00:10:14.732 { 00:10:14.732 "name": "BaseBdev4", 00:10:14.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.732 "is_configured": false, 00:10:14.732 "data_offset": 0, 00:10:14.732 "data_size": 0 00:10:14.732 } 00:10:14.732 ] 00:10:14.732 }' 00:10:14.732 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.732 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.991 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.992 [2024-12-12 19:38:57.737011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:14.992 BaseBdev2 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.992 [ 00:10:14.992 { 00:10:14.992 "name": "BaseBdev2", 00:10:14.992 "aliases": [ 00:10:14.992 "ddf44ac8-cc0d-4e7e-abaa-9841d936ac16" 00:10:14.992 ], 00:10:14.992 "product_name": "Malloc disk", 00:10:14.992 "block_size": 512, 00:10:14.992 "num_blocks": 65536, 00:10:14.992 "uuid": "ddf44ac8-cc0d-4e7e-abaa-9841d936ac16", 00:10:14.992 
"assigned_rate_limits": { 00:10:14.992 "rw_ios_per_sec": 0, 00:10:14.992 "rw_mbytes_per_sec": 0, 00:10:14.992 "r_mbytes_per_sec": 0, 00:10:14.992 "w_mbytes_per_sec": 0 00:10:14.992 }, 00:10:14.992 "claimed": true, 00:10:14.992 "claim_type": "exclusive_write", 00:10:14.992 "zoned": false, 00:10:14.992 "supported_io_types": { 00:10:14.992 "read": true, 00:10:14.992 "write": true, 00:10:14.992 "unmap": true, 00:10:14.992 "flush": true, 00:10:14.992 "reset": true, 00:10:14.992 "nvme_admin": false, 00:10:14.992 "nvme_io": false, 00:10:14.992 "nvme_io_md": false, 00:10:14.992 "write_zeroes": true, 00:10:14.992 "zcopy": true, 00:10:14.992 "get_zone_info": false, 00:10:14.992 "zone_management": false, 00:10:14.992 "zone_append": false, 00:10:14.992 "compare": false, 00:10:14.992 "compare_and_write": false, 00:10:14.992 "abort": true, 00:10:14.992 "seek_hole": false, 00:10:14.992 "seek_data": false, 00:10:14.992 "copy": true, 00:10:14.992 "nvme_iov_md": false 00:10:14.992 }, 00:10:14.992 "memory_domains": [ 00:10:14.992 { 00:10:14.992 "dma_device_id": "system", 00:10:14.992 "dma_device_type": 1 00:10:14.992 }, 00:10:14.992 { 00:10:14.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.992 "dma_device_type": 2 00:10:14.992 } 00:10:14.992 ], 00:10:14.992 "driver_specific": {} 00:10:14.992 } 00:10:14.992 ] 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.992 "name": "Existed_Raid", 00:10:14.992 "uuid": "926ab7c5-a15c-4f13-9b48-eb9ab7afc0e4", 00:10:14.992 "strip_size_kb": 64, 00:10:14.992 "state": "configuring", 00:10:14.992 "raid_level": "raid0", 00:10:14.992 "superblock": true, 00:10:14.992 "num_base_bdevs": 4, 00:10:14.992 "num_base_bdevs_discovered": 2, 00:10:14.992 "num_base_bdevs_operational": 4, 
00:10:14.992 "base_bdevs_list": [ 00:10:14.992 { 00:10:14.992 "name": "BaseBdev1", 00:10:14.992 "uuid": "0c391022-74eb-42bb-a4d6-77ded32d300a", 00:10:14.992 "is_configured": true, 00:10:14.992 "data_offset": 2048, 00:10:14.992 "data_size": 63488 00:10:14.992 }, 00:10:14.992 { 00:10:14.992 "name": "BaseBdev2", 00:10:14.992 "uuid": "ddf44ac8-cc0d-4e7e-abaa-9841d936ac16", 00:10:14.992 "is_configured": true, 00:10:14.992 "data_offset": 2048, 00:10:14.992 "data_size": 63488 00:10:14.992 }, 00:10:14.992 { 00:10:14.992 "name": "BaseBdev3", 00:10:14.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.992 "is_configured": false, 00:10:14.992 "data_offset": 0, 00:10:14.992 "data_size": 0 00:10:14.992 }, 00:10:14.992 { 00:10:14.992 "name": "BaseBdev4", 00:10:14.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.992 "is_configured": false, 00:10:14.992 "data_offset": 0, 00:10:14.992 "data_size": 0 00:10:14.992 } 00:10:14.992 ] 00:10:14.992 }' 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.992 19:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.561 [2024-12-12 19:38:58.292631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.561 BaseBdev3 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.561 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.561 [ 00:10:15.561 { 00:10:15.561 "name": "BaseBdev3", 00:10:15.561 "aliases": [ 00:10:15.561 "bd9981c2-002b-4fec-8272-9dcbc448138d" 00:10:15.561 ], 00:10:15.561 "product_name": "Malloc disk", 00:10:15.561 "block_size": 512, 00:10:15.561 "num_blocks": 65536, 00:10:15.561 "uuid": "bd9981c2-002b-4fec-8272-9dcbc448138d", 00:10:15.561 "assigned_rate_limits": { 00:10:15.561 "rw_ios_per_sec": 0, 00:10:15.561 "rw_mbytes_per_sec": 0, 00:10:15.561 "r_mbytes_per_sec": 0, 00:10:15.561 "w_mbytes_per_sec": 0 00:10:15.561 }, 00:10:15.561 "claimed": true, 00:10:15.561 "claim_type": "exclusive_write", 00:10:15.561 "zoned": false, 00:10:15.561 "supported_io_types": { 00:10:15.562 "read": true, 00:10:15.562 
"write": true, 00:10:15.562 "unmap": true, 00:10:15.562 "flush": true, 00:10:15.562 "reset": true, 00:10:15.562 "nvme_admin": false, 00:10:15.562 "nvme_io": false, 00:10:15.562 "nvme_io_md": false, 00:10:15.562 "write_zeroes": true, 00:10:15.562 "zcopy": true, 00:10:15.562 "get_zone_info": false, 00:10:15.562 "zone_management": false, 00:10:15.562 "zone_append": false, 00:10:15.562 "compare": false, 00:10:15.562 "compare_and_write": false, 00:10:15.562 "abort": true, 00:10:15.562 "seek_hole": false, 00:10:15.562 "seek_data": false, 00:10:15.562 "copy": true, 00:10:15.562 "nvme_iov_md": false 00:10:15.562 }, 00:10:15.562 "memory_domains": [ 00:10:15.562 { 00:10:15.562 "dma_device_id": "system", 00:10:15.562 "dma_device_type": 1 00:10:15.562 }, 00:10:15.562 { 00:10:15.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.562 "dma_device_type": 2 00:10:15.562 } 00:10:15.562 ], 00:10:15.562 "driver_specific": {} 00:10:15.562 } 00:10:15.562 ] 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.562 "name": "Existed_Raid", 00:10:15.562 "uuid": "926ab7c5-a15c-4f13-9b48-eb9ab7afc0e4", 00:10:15.562 "strip_size_kb": 64, 00:10:15.562 "state": "configuring", 00:10:15.562 "raid_level": "raid0", 00:10:15.562 "superblock": true, 00:10:15.562 "num_base_bdevs": 4, 00:10:15.562 "num_base_bdevs_discovered": 3, 00:10:15.562 "num_base_bdevs_operational": 4, 00:10:15.562 "base_bdevs_list": [ 00:10:15.562 { 00:10:15.562 "name": "BaseBdev1", 00:10:15.562 "uuid": "0c391022-74eb-42bb-a4d6-77ded32d300a", 00:10:15.562 "is_configured": true, 00:10:15.562 "data_offset": 2048, 00:10:15.562 "data_size": 63488 00:10:15.562 }, 00:10:15.562 { 00:10:15.562 "name": "BaseBdev2", 00:10:15.562 "uuid": 
"ddf44ac8-cc0d-4e7e-abaa-9841d936ac16", 00:10:15.562 "is_configured": true, 00:10:15.562 "data_offset": 2048, 00:10:15.562 "data_size": 63488 00:10:15.562 }, 00:10:15.562 { 00:10:15.562 "name": "BaseBdev3", 00:10:15.562 "uuid": "bd9981c2-002b-4fec-8272-9dcbc448138d", 00:10:15.562 "is_configured": true, 00:10:15.562 "data_offset": 2048, 00:10:15.562 "data_size": 63488 00:10:15.562 }, 00:10:15.562 { 00:10:15.562 "name": "BaseBdev4", 00:10:15.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.562 "is_configured": false, 00:10:15.562 "data_offset": 0, 00:10:15.562 "data_size": 0 00:10:15.562 } 00:10:15.562 ] 00:10:15.562 }' 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.562 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.131 [2024-12-12 19:38:58.762835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:16.131 [2024-12-12 19:38:58.763225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.131 [2024-12-12 19:38:58.763250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:16.131 [2024-12-12 19:38:58.763589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:16.131 BaseBdev4 00:10:16.131 [2024-12-12 19:38:58.763781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.131 [2024-12-12 19:38:58.763800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:16.131 [2024-12-12 19:38:58.763969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.131 [ 00:10:16.131 { 00:10:16.131 "name": "BaseBdev4", 00:10:16.131 "aliases": [ 00:10:16.131 "87b99561-5581-473c-99b3-82d5706589ae" 00:10:16.131 ], 00:10:16.131 "product_name": "Malloc disk", 00:10:16.131 "block_size": 512, 00:10:16.131 
"num_blocks": 65536, 00:10:16.131 "uuid": "87b99561-5581-473c-99b3-82d5706589ae", 00:10:16.131 "assigned_rate_limits": { 00:10:16.131 "rw_ios_per_sec": 0, 00:10:16.131 "rw_mbytes_per_sec": 0, 00:10:16.131 "r_mbytes_per_sec": 0, 00:10:16.131 "w_mbytes_per_sec": 0 00:10:16.131 }, 00:10:16.131 "claimed": true, 00:10:16.131 "claim_type": "exclusive_write", 00:10:16.131 "zoned": false, 00:10:16.131 "supported_io_types": { 00:10:16.131 "read": true, 00:10:16.131 "write": true, 00:10:16.131 "unmap": true, 00:10:16.131 "flush": true, 00:10:16.131 "reset": true, 00:10:16.131 "nvme_admin": false, 00:10:16.131 "nvme_io": false, 00:10:16.131 "nvme_io_md": false, 00:10:16.131 "write_zeroes": true, 00:10:16.131 "zcopy": true, 00:10:16.131 "get_zone_info": false, 00:10:16.131 "zone_management": false, 00:10:16.131 "zone_append": false, 00:10:16.131 "compare": false, 00:10:16.131 "compare_and_write": false, 00:10:16.131 "abort": true, 00:10:16.131 "seek_hole": false, 00:10:16.131 "seek_data": false, 00:10:16.131 "copy": true, 00:10:16.131 "nvme_iov_md": false 00:10:16.131 }, 00:10:16.131 "memory_domains": [ 00:10:16.131 { 00:10:16.131 "dma_device_id": "system", 00:10:16.131 "dma_device_type": 1 00:10:16.131 }, 00:10:16.131 { 00:10:16.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.131 "dma_device_type": 2 00:10:16.131 } 00:10:16.131 ], 00:10:16.131 "driver_specific": {} 00:10:16.131 } 00:10:16.131 ] 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.131 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.131 "name": "Existed_Raid", 00:10:16.131 "uuid": "926ab7c5-a15c-4f13-9b48-eb9ab7afc0e4", 00:10:16.131 "strip_size_kb": 64, 00:10:16.131 "state": "online", 00:10:16.131 "raid_level": "raid0", 00:10:16.131 "superblock": true, 00:10:16.131 "num_base_bdevs": 4, 
00:10:16.131 "num_base_bdevs_discovered": 4, 00:10:16.131 "num_base_bdevs_operational": 4, 00:10:16.131 "base_bdevs_list": [ 00:10:16.131 { 00:10:16.131 "name": "BaseBdev1", 00:10:16.131 "uuid": "0c391022-74eb-42bb-a4d6-77ded32d300a", 00:10:16.131 "is_configured": true, 00:10:16.131 "data_offset": 2048, 00:10:16.131 "data_size": 63488 00:10:16.131 }, 00:10:16.131 { 00:10:16.131 "name": "BaseBdev2", 00:10:16.131 "uuid": "ddf44ac8-cc0d-4e7e-abaa-9841d936ac16", 00:10:16.131 "is_configured": true, 00:10:16.131 "data_offset": 2048, 00:10:16.131 "data_size": 63488 00:10:16.131 }, 00:10:16.132 { 00:10:16.132 "name": "BaseBdev3", 00:10:16.132 "uuid": "bd9981c2-002b-4fec-8272-9dcbc448138d", 00:10:16.132 "is_configured": true, 00:10:16.132 "data_offset": 2048, 00:10:16.132 "data_size": 63488 00:10:16.132 }, 00:10:16.132 { 00:10:16.132 "name": "BaseBdev4", 00:10:16.132 "uuid": "87b99561-5581-473c-99b3-82d5706589ae", 00:10:16.132 "is_configured": true, 00:10:16.132 "data_offset": 2048, 00:10:16.132 "data_size": 63488 00:10:16.132 } 00:10:16.132 ] 00:10:16.132 }' 00:10:16.132 19:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.132 19:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.707 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.707 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.707 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.707 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.707 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.707 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.707 
19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.707 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.708 [2024-12-12 19:38:59.262411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.708 "name": "Existed_Raid", 00:10:16.708 "aliases": [ 00:10:16.708 "926ab7c5-a15c-4f13-9b48-eb9ab7afc0e4" 00:10:16.708 ], 00:10:16.708 "product_name": "Raid Volume", 00:10:16.708 "block_size": 512, 00:10:16.708 "num_blocks": 253952, 00:10:16.708 "uuid": "926ab7c5-a15c-4f13-9b48-eb9ab7afc0e4", 00:10:16.708 "assigned_rate_limits": { 00:10:16.708 "rw_ios_per_sec": 0, 00:10:16.708 "rw_mbytes_per_sec": 0, 00:10:16.708 "r_mbytes_per_sec": 0, 00:10:16.708 "w_mbytes_per_sec": 0 00:10:16.708 }, 00:10:16.708 "claimed": false, 00:10:16.708 "zoned": false, 00:10:16.708 "supported_io_types": { 00:10:16.708 "read": true, 00:10:16.708 "write": true, 00:10:16.708 "unmap": true, 00:10:16.708 "flush": true, 00:10:16.708 "reset": true, 00:10:16.708 "nvme_admin": false, 00:10:16.708 "nvme_io": false, 00:10:16.708 "nvme_io_md": false, 00:10:16.708 "write_zeroes": true, 00:10:16.708 "zcopy": false, 00:10:16.708 "get_zone_info": false, 00:10:16.708 "zone_management": false, 00:10:16.708 "zone_append": false, 00:10:16.708 "compare": false, 00:10:16.708 "compare_and_write": false, 00:10:16.708 "abort": false, 00:10:16.708 "seek_hole": false, 00:10:16.708 "seek_data": false, 00:10:16.708 "copy": false, 00:10:16.708 
"nvme_iov_md": false 00:10:16.708 }, 00:10:16.708 "memory_domains": [ 00:10:16.708 { 00:10:16.708 "dma_device_id": "system", 00:10:16.708 "dma_device_type": 1 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.708 "dma_device_type": 2 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "dma_device_id": "system", 00:10:16.708 "dma_device_type": 1 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.708 "dma_device_type": 2 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "dma_device_id": "system", 00:10:16.708 "dma_device_type": 1 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.708 "dma_device_type": 2 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "dma_device_id": "system", 00:10:16.708 "dma_device_type": 1 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.708 "dma_device_type": 2 00:10:16.708 } 00:10:16.708 ], 00:10:16.708 "driver_specific": { 00:10:16.708 "raid": { 00:10:16.708 "uuid": "926ab7c5-a15c-4f13-9b48-eb9ab7afc0e4", 00:10:16.708 "strip_size_kb": 64, 00:10:16.708 "state": "online", 00:10:16.708 "raid_level": "raid0", 00:10:16.708 "superblock": true, 00:10:16.708 "num_base_bdevs": 4, 00:10:16.708 "num_base_bdevs_discovered": 4, 00:10:16.708 "num_base_bdevs_operational": 4, 00:10:16.708 "base_bdevs_list": [ 00:10:16.708 { 00:10:16.708 "name": "BaseBdev1", 00:10:16.708 "uuid": "0c391022-74eb-42bb-a4d6-77ded32d300a", 00:10:16.708 "is_configured": true, 00:10:16.708 "data_offset": 2048, 00:10:16.708 "data_size": 63488 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "name": "BaseBdev2", 00:10:16.708 "uuid": "ddf44ac8-cc0d-4e7e-abaa-9841d936ac16", 00:10:16.708 "is_configured": true, 00:10:16.708 "data_offset": 2048, 00:10:16.708 "data_size": 63488 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "name": "BaseBdev3", 00:10:16.708 "uuid": "bd9981c2-002b-4fec-8272-9dcbc448138d", 00:10:16.708 "is_configured": true, 
00:10:16.708 "data_offset": 2048, 00:10:16.708 "data_size": 63488 00:10:16.708 }, 00:10:16.708 { 00:10:16.708 "name": "BaseBdev4", 00:10:16.708 "uuid": "87b99561-5581-473c-99b3-82d5706589ae", 00:10:16.708 "is_configured": true, 00:10:16.708 "data_offset": 2048, 00:10:16.708 "data_size": 63488 00:10:16.708 } 00:10:16.708 ] 00:10:16.708 } 00:10:16.708 } 00:10:16.708 }' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:16.708 BaseBdev2 00:10:16.708 BaseBdev3 00:10:16.708 BaseBdev4' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.708 19:38:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.708 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.981 [2024-12-12 19:38:59.569769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.981 [2024-12-12 19:38:59.569807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.981 [2024-12-12 19:38:59.569865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.981 "name": "Existed_Raid", 00:10:16.981 "uuid": "926ab7c5-a15c-4f13-9b48-eb9ab7afc0e4", 00:10:16.981 "strip_size_kb": 64, 00:10:16.981 "state": "offline", 00:10:16.981 "raid_level": "raid0", 00:10:16.981 "superblock": true, 00:10:16.981 "num_base_bdevs": 4, 00:10:16.981 "num_base_bdevs_discovered": 3, 00:10:16.981 "num_base_bdevs_operational": 3, 00:10:16.981 "base_bdevs_list": [ 00:10:16.981 { 00:10:16.981 "name": null, 00:10:16.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.981 "is_configured": false, 00:10:16.981 "data_offset": 0, 00:10:16.981 "data_size": 63488 00:10:16.981 }, 00:10:16.981 { 00:10:16.981 "name": "BaseBdev2", 00:10:16.981 "uuid": "ddf44ac8-cc0d-4e7e-abaa-9841d936ac16", 00:10:16.981 "is_configured": true, 00:10:16.981 "data_offset": 2048, 00:10:16.981 "data_size": 63488 00:10:16.981 }, 00:10:16.981 { 00:10:16.981 "name": "BaseBdev3", 00:10:16.981 "uuid": "bd9981c2-002b-4fec-8272-9dcbc448138d", 00:10:16.981 "is_configured": true, 00:10:16.981 "data_offset": 2048, 00:10:16.981 "data_size": 63488 00:10:16.981 }, 00:10:16.981 { 00:10:16.981 "name": "BaseBdev4", 00:10:16.981 "uuid": "87b99561-5581-473c-99b3-82d5706589ae", 00:10:16.981 "is_configured": true, 00:10:16.981 "data_offset": 2048, 00:10:16.981 "data_size": 63488 00:10:16.981 } 00:10:16.981 ] 00:10:16.981 }' 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.981 19:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.241 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:17.241 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.241 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.241 
19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.241 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.241 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.501 [2024-12-12 19:39:00.134104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.501 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.502 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.502 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:17.502 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.502 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.502 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:17.502 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.502 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.502 [2024-12-12 19:39:00.287394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:17.761 19:39:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.761 [2024-12-12 19:39:00.450544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:17.761 [2024-12-12 19:39:00.450626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.761 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.022 BaseBdev2 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.022 [ 00:10:18.022 { 00:10:18.022 "name": "BaseBdev2", 00:10:18.022 "aliases": [ 00:10:18.022 
"11427a90-cd57-4164-8b8b-9ea2caa7ef11" 00:10:18.022 ], 00:10:18.022 "product_name": "Malloc disk", 00:10:18.022 "block_size": 512, 00:10:18.022 "num_blocks": 65536, 00:10:18.022 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:18.022 "assigned_rate_limits": { 00:10:18.022 "rw_ios_per_sec": 0, 00:10:18.022 "rw_mbytes_per_sec": 0, 00:10:18.022 "r_mbytes_per_sec": 0, 00:10:18.022 "w_mbytes_per_sec": 0 00:10:18.022 }, 00:10:18.022 "claimed": false, 00:10:18.022 "zoned": false, 00:10:18.022 "supported_io_types": { 00:10:18.022 "read": true, 00:10:18.022 "write": true, 00:10:18.022 "unmap": true, 00:10:18.022 "flush": true, 00:10:18.022 "reset": true, 00:10:18.022 "nvme_admin": false, 00:10:18.022 "nvme_io": false, 00:10:18.022 "nvme_io_md": false, 00:10:18.022 "write_zeroes": true, 00:10:18.022 "zcopy": true, 00:10:18.022 "get_zone_info": false, 00:10:18.022 "zone_management": false, 00:10:18.022 "zone_append": false, 00:10:18.022 "compare": false, 00:10:18.022 "compare_and_write": false, 00:10:18.022 "abort": true, 00:10:18.022 "seek_hole": false, 00:10:18.022 "seek_data": false, 00:10:18.022 "copy": true, 00:10:18.022 "nvme_iov_md": false 00:10:18.022 }, 00:10:18.022 "memory_domains": [ 00:10:18.022 { 00:10:18.022 "dma_device_id": "system", 00:10:18.022 "dma_device_type": 1 00:10:18.022 }, 00:10:18.022 { 00:10:18.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.022 "dma_device_type": 2 00:10:18.022 } 00:10:18.022 ], 00:10:18.022 "driver_specific": {} 00:10:18.022 } 00:10:18.022 ] 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.022 19:39:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.022 BaseBdev3 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.022 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.022 [ 00:10:18.022 { 
00:10:18.022 "name": "BaseBdev3", 00:10:18.022 "aliases": [ 00:10:18.022 "ef06c464-4c9c-4aae-9401-616aa73ccc51" 00:10:18.022 ], 00:10:18.022 "product_name": "Malloc disk", 00:10:18.022 "block_size": 512, 00:10:18.022 "num_blocks": 65536, 00:10:18.022 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:18.022 "assigned_rate_limits": { 00:10:18.022 "rw_ios_per_sec": 0, 00:10:18.022 "rw_mbytes_per_sec": 0, 00:10:18.022 "r_mbytes_per_sec": 0, 00:10:18.022 "w_mbytes_per_sec": 0 00:10:18.022 }, 00:10:18.022 "claimed": false, 00:10:18.022 "zoned": false, 00:10:18.022 "supported_io_types": { 00:10:18.022 "read": true, 00:10:18.022 "write": true, 00:10:18.022 "unmap": true, 00:10:18.022 "flush": true, 00:10:18.022 "reset": true, 00:10:18.022 "nvme_admin": false, 00:10:18.022 "nvme_io": false, 00:10:18.022 "nvme_io_md": false, 00:10:18.022 "write_zeroes": true, 00:10:18.022 "zcopy": true, 00:10:18.022 "get_zone_info": false, 00:10:18.022 "zone_management": false, 00:10:18.022 "zone_append": false, 00:10:18.022 "compare": false, 00:10:18.022 "compare_and_write": false, 00:10:18.022 "abort": true, 00:10:18.022 "seek_hole": false, 00:10:18.022 "seek_data": false, 00:10:18.022 "copy": true, 00:10:18.022 "nvme_iov_md": false 00:10:18.023 }, 00:10:18.023 "memory_domains": [ 00:10:18.023 { 00:10:18.023 "dma_device_id": "system", 00:10:18.023 "dma_device_type": 1 00:10:18.023 }, 00:10:18.023 { 00:10:18.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.023 "dma_device_type": 2 00:10:18.023 } 00:10:18.023 ], 00:10:18.023 "driver_specific": {} 00:10:18.023 } 00:10:18.023 ] 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.023 BaseBdev4 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:18.023 [ 00:10:18.023 { 00:10:18.023 "name": "BaseBdev4", 00:10:18.023 "aliases": [ 00:10:18.023 "f151d7c0-893c-4544-85d3-29ab83409c27" 00:10:18.023 ], 00:10:18.023 "product_name": "Malloc disk", 00:10:18.023 "block_size": 512, 00:10:18.023 "num_blocks": 65536, 00:10:18.023 "uuid": "f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:18.023 "assigned_rate_limits": { 00:10:18.023 "rw_ios_per_sec": 0, 00:10:18.023 "rw_mbytes_per_sec": 0, 00:10:18.023 "r_mbytes_per_sec": 0, 00:10:18.023 "w_mbytes_per_sec": 0 00:10:18.023 }, 00:10:18.023 "claimed": false, 00:10:18.023 "zoned": false, 00:10:18.023 "supported_io_types": { 00:10:18.023 "read": true, 00:10:18.023 "write": true, 00:10:18.023 "unmap": true, 00:10:18.023 "flush": true, 00:10:18.023 "reset": true, 00:10:18.023 "nvme_admin": false, 00:10:18.023 "nvme_io": false, 00:10:18.023 "nvme_io_md": false, 00:10:18.023 "write_zeroes": true, 00:10:18.023 "zcopy": true, 00:10:18.023 "get_zone_info": false, 00:10:18.023 "zone_management": false, 00:10:18.023 "zone_append": false, 00:10:18.023 "compare": false, 00:10:18.023 "compare_and_write": false, 00:10:18.023 "abort": true, 00:10:18.023 "seek_hole": false, 00:10:18.023 "seek_data": false, 00:10:18.023 "copy": true, 00:10:18.023 "nvme_iov_md": false 00:10:18.023 }, 00:10:18.023 "memory_domains": [ 00:10:18.023 { 00:10:18.023 "dma_device_id": "system", 00:10:18.023 "dma_device_type": 1 00:10:18.023 }, 00:10:18.023 { 00:10:18.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.023 "dma_device_type": 2 00:10:18.023 } 00:10:18.023 ], 00:10:18.023 "driver_specific": {} 00:10:18.023 } 00:10:18.023 ] 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.023 19:39:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.023 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.283 [2024-12-12 19:39:00.867256] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.283 [2024-12-12 19:39:00.867311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.283 [2024-12-12 19:39:00.867336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.283 [2024-12-12 19:39:00.869411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.283 [2024-12-12 19:39:00.869471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.283 "name": "Existed_Raid", 00:10:18.283 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:18.283 "strip_size_kb": 64, 00:10:18.283 "state": "configuring", 00:10:18.283 "raid_level": "raid0", 00:10:18.283 "superblock": true, 00:10:18.283 "num_base_bdevs": 4, 00:10:18.283 "num_base_bdevs_discovered": 3, 00:10:18.283 "num_base_bdevs_operational": 4, 00:10:18.283 "base_bdevs_list": [ 00:10:18.283 { 00:10:18.283 "name": "BaseBdev1", 00:10:18.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.283 "is_configured": false, 00:10:18.283 "data_offset": 0, 00:10:18.283 "data_size": 0 00:10:18.283 }, 00:10:18.283 { 00:10:18.283 "name": "BaseBdev2", 00:10:18.283 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:18.283 "is_configured": true, 00:10:18.283 "data_offset": 2048, 00:10:18.283 "data_size": 63488 
00:10:18.283 }, 00:10:18.283 { 00:10:18.283 "name": "BaseBdev3", 00:10:18.283 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:18.283 "is_configured": true, 00:10:18.283 "data_offset": 2048, 00:10:18.283 "data_size": 63488 00:10:18.283 }, 00:10:18.283 { 00:10:18.283 "name": "BaseBdev4", 00:10:18.283 "uuid": "f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:18.283 "is_configured": true, 00:10:18.283 "data_offset": 2048, 00:10:18.283 "data_size": 63488 00:10:18.283 } 00:10:18.283 ] 00:10:18.283 }' 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.283 19:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.543 [2024-12-12 19:39:01.306614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.543 "name": "Existed_Raid", 00:10:18.543 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:18.543 "strip_size_kb": 64, 00:10:18.543 "state": "configuring", 00:10:18.543 "raid_level": "raid0", 00:10:18.543 "superblock": true, 00:10:18.543 "num_base_bdevs": 4, 00:10:18.543 "num_base_bdevs_discovered": 2, 00:10:18.543 "num_base_bdevs_operational": 4, 00:10:18.543 "base_bdevs_list": [ 00:10:18.543 { 00:10:18.543 "name": "BaseBdev1", 00:10:18.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.543 "is_configured": false, 00:10:18.543 "data_offset": 0, 00:10:18.543 "data_size": 0 00:10:18.543 }, 00:10:18.543 { 00:10:18.543 "name": null, 00:10:18.543 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:18.543 "is_configured": false, 00:10:18.543 "data_offset": 0, 00:10:18.543 "data_size": 63488 
00:10:18.543 }, 00:10:18.543 { 00:10:18.543 "name": "BaseBdev3", 00:10:18.543 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:18.543 "is_configured": true, 00:10:18.543 "data_offset": 2048, 00:10:18.543 "data_size": 63488 00:10:18.543 }, 00:10:18.543 { 00:10:18.543 "name": "BaseBdev4", 00:10:18.543 "uuid": "f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:18.543 "is_configured": true, 00:10:18.543 "data_offset": 2048, 00:10:18.543 "data_size": 63488 00:10:18.543 } 00:10:18.543 ] 00:10:18.543 }' 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.543 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.113 [2024-12-12 19:39:01.822819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.113 BaseBdev1 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.113 [ 00:10:19.113 { 00:10:19.113 "name": "BaseBdev1", 00:10:19.113 "aliases": [ 00:10:19.113 "70320072-fd21-417f-b61e-906d7bf8e55f" 00:10:19.113 ], 00:10:19.113 "product_name": "Malloc disk", 00:10:19.113 "block_size": 512, 00:10:19.113 "num_blocks": 65536, 00:10:19.113 "uuid": "70320072-fd21-417f-b61e-906d7bf8e55f", 00:10:19.113 "assigned_rate_limits": { 00:10:19.113 "rw_ios_per_sec": 0, 00:10:19.113 "rw_mbytes_per_sec": 0, 
00:10:19.113 "r_mbytes_per_sec": 0, 00:10:19.113 "w_mbytes_per_sec": 0 00:10:19.113 }, 00:10:19.113 "claimed": true, 00:10:19.113 "claim_type": "exclusive_write", 00:10:19.113 "zoned": false, 00:10:19.113 "supported_io_types": { 00:10:19.113 "read": true, 00:10:19.113 "write": true, 00:10:19.113 "unmap": true, 00:10:19.113 "flush": true, 00:10:19.113 "reset": true, 00:10:19.113 "nvme_admin": false, 00:10:19.113 "nvme_io": false, 00:10:19.113 "nvme_io_md": false, 00:10:19.113 "write_zeroes": true, 00:10:19.113 "zcopy": true, 00:10:19.113 "get_zone_info": false, 00:10:19.113 "zone_management": false, 00:10:19.113 "zone_append": false, 00:10:19.113 "compare": false, 00:10:19.113 "compare_and_write": false, 00:10:19.113 "abort": true, 00:10:19.113 "seek_hole": false, 00:10:19.113 "seek_data": false, 00:10:19.113 "copy": true, 00:10:19.113 "nvme_iov_md": false 00:10:19.113 }, 00:10:19.113 "memory_domains": [ 00:10:19.113 { 00:10:19.113 "dma_device_id": "system", 00:10:19.113 "dma_device_type": 1 00:10:19.113 }, 00:10:19.113 { 00:10:19.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.113 "dma_device_type": 2 00:10:19.113 } 00:10:19.113 ], 00:10:19.113 "driver_specific": {} 00:10:19.113 } 00:10:19.113 ] 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.113 19:39:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.113 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.113 "name": "Existed_Raid", 00:10:19.113 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:19.113 "strip_size_kb": 64, 00:10:19.113 "state": "configuring", 00:10:19.113 "raid_level": "raid0", 00:10:19.113 "superblock": true, 00:10:19.113 "num_base_bdevs": 4, 00:10:19.113 "num_base_bdevs_discovered": 3, 00:10:19.113 "num_base_bdevs_operational": 4, 00:10:19.113 "base_bdevs_list": [ 00:10:19.113 { 00:10:19.113 "name": "BaseBdev1", 00:10:19.113 "uuid": "70320072-fd21-417f-b61e-906d7bf8e55f", 00:10:19.113 "is_configured": true, 00:10:19.113 "data_offset": 2048, 00:10:19.113 "data_size": 63488 00:10:19.113 }, 00:10:19.113 { 
00:10:19.113 "name": null, 00:10:19.113 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:19.113 "is_configured": false, 00:10:19.113 "data_offset": 0, 00:10:19.113 "data_size": 63488 00:10:19.113 }, 00:10:19.113 { 00:10:19.113 "name": "BaseBdev3", 00:10:19.114 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:19.114 "is_configured": true, 00:10:19.114 "data_offset": 2048, 00:10:19.114 "data_size": 63488 00:10:19.114 }, 00:10:19.114 { 00:10:19.114 "name": "BaseBdev4", 00:10:19.114 "uuid": "f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:19.114 "is_configured": true, 00:10:19.114 "data_offset": 2048, 00:10:19.114 "data_size": 63488 00:10:19.114 } 00:10:19.114 ] 00:10:19.114 }' 00:10:19.114 19:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.114 19:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.684 [2024-12-12 19:39:02.346074] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.684 19:39:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.684 "name": "Existed_Raid", 00:10:19.684 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:19.684 "strip_size_kb": 64, 00:10:19.684 "state": "configuring", 00:10:19.684 "raid_level": "raid0", 00:10:19.684 "superblock": true, 00:10:19.684 "num_base_bdevs": 4, 00:10:19.684 "num_base_bdevs_discovered": 2, 00:10:19.684 "num_base_bdevs_operational": 4, 00:10:19.684 "base_bdevs_list": [ 00:10:19.684 { 00:10:19.684 "name": "BaseBdev1", 00:10:19.684 "uuid": "70320072-fd21-417f-b61e-906d7bf8e55f", 00:10:19.684 "is_configured": true, 00:10:19.684 "data_offset": 2048, 00:10:19.684 "data_size": 63488 00:10:19.684 }, 00:10:19.684 { 00:10:19.684 "name": null, 00:10:19.684 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:19.684 "is_configured": false, 00:10:19.684 "data_offset": 0, 00:10:19.684 "data_size": 63488 00:10:19.684 }, 00:10:19.684 { 00:10:19.684 "name": null, 00:10:19.684 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:19.684 "is_configured": false, 00:10:19.684 "data_offset": 0, 00:10:19.684 "data_size": 63488 00:10:19.684 }, 00:10:19.684 { 00:10:19.684 "name": "BaseBdev4", 00:10:19.684 "uuid": "f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:19.684 "is_configured": true, 00:10:19.684 "data_offset": 2048, 00:10:19.684 "data_size": 63488 00:10:19.684 } 00:10:19.684 ] 00:10:19.684 }' 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.684 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.944 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.944 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.944 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.944 
19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.944 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.205 [2024-12-12 19:39:02.797226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.205 "name": "Existed_Raid", 00:10:20.205 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:20.205 "strip_size_kb": 64, 00:10:20.205 "state": "configuring", 00:10:20.205 "raid_level": "raid0", 00:10:20.205 "superblock": true, 00:10:20.205 "num_base_bdevs": 4, 00:10:20.205 "num_base_bdevs_discovered": 3, 00:10:20.205 "num_base_bdevs_operational": 4, 00:10:20.205 "base_bdevs_list": [ 00:10:20.205 { 00:10:20.205 "name": "BaseBdev1", 00:10:20.205 "uuid": "70320072-fd21-417f-b61e-906d7bf8e55f", 00:10:20.205 "is_configured": true, 00:10:20.205 "data_offset": 2048, 00:10:20.205 "data_size": 63488 00:10:20.205 }, 00:10:20.205 { 00:10:20.205 "name": null, 00:10:20.205 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:20.205 "is_configured": false, 00:10:20.205 "data_offset": 0, 00:10:20.205 "data_size": 63488 00:10:20.205 }, 00:10:20.205 { 00:10:20.205 "name": "BaseBdev3", 00:10:20.205 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:20.205 "is_configured": true, 00:10:20.205 "data_offset": 2048, 00:10:20.205 "data_size": 63488 00:10:20.205 }, 00:10:20.205 { 00:10:20.205 "name": "BaseBdev4", 00:10:20.205 "uuid": 
"f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:20.205 "is_configured": true, 00:10:20.205 "data_offset": 2048, 00:10:20.205 "data_size": 63488 00:10:20.205 } 00:10:20.205 ] 00:10:20.205 }' 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.205 19:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.465 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.465 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.465 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.465 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.465 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.465 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:20.465 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.465 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.465 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.465 [2024-12-12 19:39:03.308404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.724 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.725 "name": "Existed_Raid", 00:10:20.725 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:20.725 "strip_size_kb": 64, 00:10:20.725 "state": "configuring", 00:10:20.725 "raid_level": "raid0", 00:10:20.725 "superblock": true, 00:10:20.725 "num_base_bdevs": 4, 00:10:20.725 "num_base_bdevs_discovered": 2, 00:10:20.725 "num_base_bdevs_operational": 4, 00:10:20.725 "base_bdevs_list": [ 00:10:20.725 { 00:10:20.725 "name": null, 00:10:20.725 
"uuid": "70320072-fd21-417f-b61e-906d7bf8e55f", 00:10:20.725 "is_configured": false, 00:10:20.725 "data_offset": 0, 00:10:20.725 "data_size": 63488 00:10:20.725 }, 00:10:20.725 { 00:10:20.725 "name": null, 00:10:20.725 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:20.725 "is_configured": false, 00:10:20.725 "data_offset": 0, 00:10:20.725 "data_size": 63488 00:10:20.725 }, 00:10:20.725 { 00:10:20.725 "name": "BaseBdev3", 00:10:20.725 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:20.725 "is_configured": true, 00:10:20.725 "data_offset": 2048, 00:10:20.725 "data_size": 63488 00:10:20.725 }, 00:10:20.725 { 00:10:20.725 "name": "BaseBdev4", 00:10:20.725 "uuid": "f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:20.725 "is_configured": true, 00:10:20.725 "data_offset": 2048, 00:10:20.725 "data_size": 63488 00:10:20.725 } 00:10:20.725 ] 00:10:20.725 }' 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.725 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.984 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.984 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.984 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.984 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.244 [2024-12-12 19:39:03.873989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.244 19:39:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.244 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.244 "name": "Existed_Raid", 00:10:21.244 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:21.244 "strip_size_kb": 64, 00:10:21.244 "state": "configuring", 00:10:21.244 "raid_level": "raid0", 00:10:21.244 "superblock": true, 00:10:21.244 "num_base_bdevs": 4, 00:10:21.244 "num_base_bdevs_discovered": 3, 00:10:21.244 "num_base_bdevs_operational": 4, 00:10:21.244 "base_bdevs_list": [ 00:10:21.244 { 00:10:21.244 "name": null, 00:10:21.245 "uuid": "70320072-fd21-417f-b61e-906d7bf8e55f", 00:10:21.245 "is_configured": false, 00:10:21.245 "data_offset": 0, 00:10:21.245 "data_size": 63488 00:10:21.245 }, 00:10:21.245 { 00:10:21.245 "name": "BaseBdev2", 00:10:21.245 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:21.245 "is_configured": true, 00:10:21.245 "data_offset": 2048, 00:10:21.245 "data_size": 63488 00:10:21.245 }, 00:10:21.245 { 00:10:21.245 "name": "BaseBdev3", 00:10:21.245 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:21.245 "is_configured": true, 00:10:21.245 "data_offset": 2048, 00:10:21.245 "data_size": 63488 00:10:21.245 }, 00:10:21.245 { 00:10:21.245 "name": "BaseBdev4", 00:10:21.245 "uuid": "f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:21.245 "is_configured": true, 00:10:21.245 "data_offset": 2048, 00:10:21.245 "data_size": 63488 00:10:21.245 } 00:10:21.245 ] 00:10:21.245 }' 00:10:21.245 19:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.245 19:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.504 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.504 19:39:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.504 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.504 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 70320072-fd21-417f-b61e-906d7bf8e55f 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.764 [2024-12-12 19:39:04.463214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:21.764 [2024-12-12 19:39:04.463509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:21.764 [2024-12-12 19:39:04.463527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:21.764 NewBaseBdev 00:10:21.764 [2024-12-12 19:39:04.463914] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:21.764 [2024-12-12 19:39:04.464094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:21.764 [2024-12-12 19:39:04.464108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:21.764 [2024-12-12 19:39:04.464261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.764 
19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.764 [ 00:10:21.764 { 00:10:21.764 "name": "NewBaseBdev", 00:10:21.764 "aliases": [ 00:10:21.764 "70320072-fd21-417f-b61e-906d7bf8e55f" 00:10:21.764 ], 00:10:21.764 "product_name": "Malloc disk", 00:10:21.764 "block_size": 512, 00:10:21.764 "num_blocks": 65536, 00:10:21.764 "uuid": "70320072-fd21-417f-b61e-906d7bf8e55f", 00:10:21.764 "assigned_rate_limits": { 00:10:21.764 "rw_ios_per_sec": 0, 00:10:21.764 "rw_mbytes_per_sec": 0, 00:10:21.764 "r_mbytes_per_sec": 0, 00:10:21.764 "w_mbytes_per_sec": 0 00:10:21.764 }, 00:10:21.764 "claimed": true, 00:10:21.764 "claim_type": "exclusive_write", 00:10:21.764 "zoned": false, 00:10:21.764 "supported_io_types": { 00:10:21.764 "read": true, 00:10:21.764 "write": true, 00:10:21.764 "unmap": true, 00:10:21.764 "flush": true, 00:10:21.764 "reset": true, 00:10:21.764 "nvme_admin": false, 00:10:21.764 "nvme_io": false, 00:10:21.764 "nvme_io_md": false, 00:10:21.764 "write_zeroes": true, 00:10:21.764 "zcopy": true, 00:10:21.764 "get_zone_info": false, 00:10:21.764 "zone_management": false, 00:10:21.764 "zone_append": false, 00:10:21.764 "compare": false, 00:10:21.764 "compare_and_write": false, 00:10:21.764 "abort": true, 00:10:21.764 "seek_hole": false, 00:10:21.764 "seek_data": false, 00:10:21.764 "copy": true, 00:10:21.764 "nvme_iov_md": false 00:10:21.764 }, 00:10:21.764 "memory_domains": [ 00:10:21.764 { 00:10:21.764 "dma_device_id": "system", 00:10:21.764 "dma_device_type": 1 00:10:21.764 }, 00:10:21.764 { 00:10:21.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.764 "dma_device_type": 2 00:10:21.764 } 00:10:21.764 ], 00:10:21.764 "driver_specific": {} 00:10:21.764 } 00:10:21.764 ] 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:21.764 19:39:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.764 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.765 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.765 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.765 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.765 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.765 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.765 "name": "Existed_Raid", 00:10:21.765 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:21.765 "strip_size_kb": 64, 00:10:21.765 
"state": "online", 00:10:21.765 "raid_level": "raid0", 00:10:21.765 "superblock": true, 00:10:21.765 "num_base_bdevs": 4, 00:10:21.765 "num_base_bdevs_discovered": 4, 00:10:21.765 "num_base_bdevs_operational": 4, 00:10:21.765 "base_bdevs_list": [ 00:10:21.765 { 00:10:21.765 "name": "NewBaseBdev", 00:10:21.765 "uuid": "70320072-fd21-417f-b61e-906d7bf8e55f", 00:10:21.765 "is_configured": true, 00:10:21.765 "data_offset": 2048, 00:10:21.765 "data_size": 63488 00:10:21.765 }, 00:10:21.765 { 00:10:21.765 "name": "BaseBdev2", 00:10:21.765 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:21.765 "is_configured": true, 00:10:21.765 "data_offset": 2048, 00:10:21.765 "data_size": 63488 00:10:21.765 }, 00:10:21.765 { 00:10:21.765 "name": "BaseBdev3", 00:10:21.765 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:21.765 "is_configured": true, 00:10:21.765 "data_offset": 2048, 00:10:21.765 "data_size": 63488 00:10:21.765 }, 00:10:21.765 { 00:10:21.765 "name": "BaseBdev4", 00:10:21.765 "uuid": "f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:21.765 "is_configured": true, 00:10:21.765 "data_offset": 2048, 00:10:21.765 "data_size": 63488 00:10:21.765 } 00:10:21.765 ] 00:10:21.765 }' 00:10:21.765 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.765 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.334 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.334 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:22.334 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.334 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.335 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.335 
19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.335 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:22.335 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.335 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.335 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.335 [2024-12-12 19:39:04.922902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.335 19:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.335 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.335 "name": "Existed_Raid", 00:10:22.335 "aliases": [ 00:10:22.335 "d7271a2f-8642-4738-81b1-cdbed2368e56" 00:10:22.335 ], 00:10:22.335 "product_name": "Raid Volume", 00:10:22.335 "block_size": 512, 00:10:22.335 "num_blocks": 253952, 00:10:22.335 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:22.335 "assigned_rate_limits": { 00:10:22.335 "rw_ios_per_sec": 0, 00:10:22.335 "rw_mbytes_per_sec": 0, 00:10:22.335 "r_mbytes_per_sec": 0, 00:10:22.335 "w_mbytes_per_sec": 0 00:10:22.335 }, 00:10:22.335 "claimed": false, 00:10:22.335 "zoned": false, 00:10:22.335 "supported_io_types": { 00:10:22.335 "read": true, 00:10:22.335 "write": true, 00:10:22.335 "unmap": true, 00:10:22.335 "flush": true, 00:10:22.335 "reset": true, 00:10:22.335 "nvme_admin": false, 00:10:22.335 "nvme_io": false, 00:10:22.335 "nvme_io_md": false, 00:10:22.335 "write_zeroes": true, 00:10:22.335 "zcopy": false, 00:10:22.335 "get_zone_info": false, 00:10:22.335 "zone_management": false, 00:10:22.335 "zone_append": false, 00:10:22.335 "compare": false, 00:10:22.335 "compare_and_write": false, 00:10:22.335 "abort": 
false, 00:10:22.335 "seek_hole": false, 00:10:22.335 "seek_data": false, 00:10:22.335 "copy": false, 00:10:22.335 "nvme_iov_md": false 00:10:22.335 }, 00:10:22.335 "memory_domains": [ 00:10:22.335 { 00:10:22.335 "dma_device_id": "system", 00:10:22.335 "dma_device_type": 1 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.335 "dma_device_type": 2 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 "dma_device_id": "system", 00:10:22.335 "dma_device_type": 1 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.335 "dma_device_type": 2 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 "dma_device_id": "system", 00:10:22.335 "dma_device_type": 1 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.335 "dma_device_type": 2 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 "dma_device_id": "system", 00:10:22.335 "dma_device_type": 1 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.335 "dma_device_type": 2 00:10:22.335 } 00:10:22.335 ], 00:10:22.335 "driver_specific": { 00:10:22.335 "raid": { 00:10:22.335 "uuid": "d7271a2f-8642-4738-81b1-cdbed2368e56", 00:10:22.335 "strip_size_kb": 64, 00:10:22.335 "state": "online", 00:10:22.335 "raid_level": "raid0", 00:10:22.335 "superblock": true, 00:10:22.335 "num_base_bdevs": 4, 00:10:22.335 "num_base_bdevs_discovered": 4, 00:10:22.335 "num_base_bdevs_operational": 4, 00:10:22.335 "base_bdevs_list": [ 00:10:22.335 { 00:10:22.335 "name": "NewBaseBdev", 00:10:22.335 "uuid": "70320072-fd21-417f-b61e-906d7bf8e55f", 00:10:22.335 "is_configured": true, 00:10:22.335 "data_offset": 2048, 00:10:22.335 "data_size": 63488 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 "name": "BaseBdev2", 00:10:22.335 "uuid": "11427a90-cd57-4164-8b8b-9ea2caa7ef11", 00:10:22.335 "is_configured": true, 00:10:22.335 "data_offset": 2048, 00:10:22.335 "data_size": 63488 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 
"name": "BaseBdev3", 00:10:22.335 "uuid": "ef06c464-4c9c-4aae-9401-616aa73ccc51", 00:10:22.335 "is_configured": true, 00:10:22.335 "data_offset": 2048, 00:10:22.335 "data_size": 63488 00:10:22.335 }, 00:10:22.335 { 00:10:22.335 "name": "BaseBdev4", 00:10:22.335 "uuid": "f151d7c0-893c-4544-85d3-29ab83409c27", 00:10:22.335 "is_configured": true, 00:10:22.335 "data_offset": 2048, 00:10:22.335 "data_size": 63488 00:10:22.335 } 00:10:22.335 ] 00:10:22.335 } 00:10:22.335 } 00:10:22.335 }' 00:10:22.335 19:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:22.335 BaseBdev2 00:10:22.335 BaseBdev3 00:10:22.335 BaseBdev4' 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.335 19:39:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.335 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.594 [2024-12-12 19:39:05.245892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.594 [2024-12-12 19:39:05.245933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.594 [2024-12-12 19:39:05.246018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.594 [2024-12-12 19:39:05.246102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.594 [2024-12-12 19:39:05.246118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71741 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71741 ']' 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71741 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71741 00:10:22.594 killing process with pid 71741 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71741' 00:10:22.594 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71741 00:10:22.595 [2024-12-12 19:39:05.281117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.595 19:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71741 00:10:23.165 [2024-12-12 19:39:05.704882] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.105 19:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:24.105 00:10:24.105 real 0m11.547s 00:10:24.105 user 0m18.024s 00:10:24.105 sys 0m2.211s 00:10:24.105 19:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.105 19:39:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.105 ************************************ 00:10:24.105 END TEST raid_state_function_test_sb 00:10:24.105 ************************************ 00:10:24.105 19:39:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:24.105 19:39:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:24.365 19:39:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.365 19:39:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.365 ************************************ 00:10:24.365 START TEST raid_superblock_test 00:10:24.365 ************************************ 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72411 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72411 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72411 ']' 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.365 19:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.365 [2024-12-12 19:39:07.058846] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:24.365 [2024-12-12 19:39:07.058989] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72411 ] 00:10:24.624 [2024-12-12 19:39:07.239674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.624 [2024-12-12 19:39:07.382576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.882 [2024-12-12 19:39:07.608142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.882 [2024-12-12 19:39:07.608227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:25.141 
19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.141 malloc1 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.141 [2024-12-12 19:39:07.939037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.141 [2024-12-12 19:39:07.939105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.141 [2024-12-12 19:39:07.939128] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:25.141 [2024-12-12 19:39:07.939137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.141 [2024-12-12 19:39:07.941568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.141 [2024-12-12 19:39:07.941601] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.141 pt1 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.141 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.401 malloc2 00:10:25.401 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.401 19:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.401 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.401 19:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.401 [2024-12-12 19:39:07.999852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.401 [2024-12-12 19:39:07.999918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.401 [2024-12-12 19:39:07.999943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:25.401 [2024-12-12 19:39:07.999952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.401 [2024-12-12 19:39:08.002308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.401 [2024-12-12 19:39:08.002345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.401 
pt2 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.401 malloc3 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.401 [2024-12-12 19:39:08.072849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:25.401 [2024-12-12 19:39:08.072982] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.401 [2024-12-12 19:39:08.073008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:25.401 [2024-12-12 19:39:08.073018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.401 [2024-12-12 19:39:08.075607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.401 [2024-12-12 19:39:08.075645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:25.401 pt3 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.401 malloc4 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.401 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.401 [2024-12-12 19:39:08.132387] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:25.401 [2024-12-12 19:39:08.132449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.401 [2024-12-12 19:39:08.132469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:25.401 [2024-12-12 19:39:08.132478] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.401 [2024-12-12 19:39:08.134881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.401 [2024-12-12 19:39:08.134916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:25.401 pt4 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.402 [2024-12-12 19:39:08.144406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.402 [2024-12-12 
19:39:08.146484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.402 [2024-12-12 19:39:08.146586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:25.402 [2024-12-12 19:39:08.146639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:25.402 [2024-12-12 19:39:08.146837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:25.402 [2024-12-12 19:39:08.146854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:25.402 [2024-12-12 19:39:08.147168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:25.402 [2024-12-12 19:39:08.147383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:25.402 [2024-12-12 19:39:08.147404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:25.402 [2024-12-12 19:39:08.147582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.402 "name": "raid_bdev1", 00:10:25.402 "uuid": "77f7390b-096d-47c5-b945-e849967624e6", 00:10:25.402 "strip_size_kb": 64, 00:10:25.402 "state": "online", 00:10:25.402 "raid_level": "raid0", 00:10:25.402 "superblock": true, 00:10:25.402 "num_base_bdevs": 4, 00:10:25.402 "num_base_bdevs_discovered": 4, 00:10:25.402 "num_base_bdevs_operational": 4, 00:10:25.402 "base_bdevs_list": [ 00:10:25.402 { 00:10:25.402 "name": "pt1", 00:10:25.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.402 "is_configured": true, 00:10:25.402 "data_offset": 2048, 00:10:25.402 "data_size": 63488 00:10:25.402 }, 00:10:25.402 { 00:10:25.402 "name": "pt2", 00:10:25.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.402 "is_configured": true, 00:10:25.402 "data_offset": 2048, 00:10:25.402 "data_size": 63488 00:10:25.402 }, 00:10:25.402 { 00:10:25.402 "name": "pt3", 00:10:25.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.402 "is_configured": true, 00:10:25.402 "data_offset": 2048, 00:10:25.402 
"data_size": 63488 00:10:25.402 }, 00:10:25.402 { 00:10:25.402 "name": "pt4", 00:10:25.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:25.402 "is_configured": true, 00:10:25.402 "data_offset": 2048, 00:10:25.402 "data_size": 63488 00:10:25.402 } 00:10:25.402 ] 00:10:25.402 }' 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.402 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.970 [2024-12-12 19:39:08.596005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.970 "name": "raid_bdev1", 00:10:25.970 "aliases": [ 00:10:25.970 "77f7390b-096d-47c5-b945-e849967624e6" 
00:10:25.970 ], 00:10:25.970 "product_name": "Raid Volume", 00:10:25.970 "block_size": 512, 00:10:25.970 "num_blocks": 253952, 00:10:25.970 "uuid": "77f7390b-096d-47c5-b945-e849967624e6", 00:10:25.970 "assigned_rate_limits": { 00:10:25.970 "rw_ios_per_sec": 0, 00:10:25.970 "rw_mbytes_per_sec": 0, 00:10:25.970 "r_mbytes_per_sec": 0, 00:10:25.970 "w_mbytes_per_sec": 0 00:10:25.970 }, 00:10:25.970 "claimed": false, 00:10:25.970 "zoned": false, 00:10:25.970 "supported_io_types": { 00:10:25.970 "read": true, 00:10:25.970 "write": true, 00:10:25.970 "unmap": true, 00:10:25.970 "flush": true, 00:10:25.970 "reset": true, 00:10:25.970 "nvme_admin": false, 00:10:25.970 "nvme_io": false, 00:10:25.970 "nvme_io_md": false, 00:10:25.970 "write_zeroes": true, 00:10:25.970 "zcopy": false, 00:10:25.970 "get_zone_info": false, 00:10:25.970 "zone_management": false, 00:10:25.970 "zone_append": false, 00:10:25.970 "compare": false, 00:10:25.970 "compare_and_write": false, 00:10:25.970 "abort": false, 00:10:25.970 "seek_hole": false, 00:10:25.970 "seek_data": false, 00:10:25.970 "copy": false, 00:10:25.970 "nvme_iov_md": false 00:10:25.970 }, 00:10:25.970 "memory_domains": [ 00:10:25.970 { 00:10:25.970 "dma_device_id": "system", 00:10:25.970 "dma_device_type": 1 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.970 "dma_device_type": 2 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "dma_device_id": "system", 00:10:25.970 "dma_device_type": 1 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.970 "dma_device_type": 2 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "dma_device_id": "system", 00:10:25.970 "dma_device_type": 1 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.970 "dma_device_type": 2 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "dma_device_id": "system", 00:10:25.970 "dma_device_type": 1 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:25.970 "dma_device_type": 2 00:10:25.970 } 00:10:25.970 ], 00:10:25.970 "driver_specific": { 00:10:25.970 "raid": { 00:10:25.970 "uuid": "77f7390b-096d-47c5-b945-e849967624e6", 00:10:25.970 "strip_size_kb": 64, 00:10:25.970 "state": "online", 00:10:25.970 "raid_level": "raid0", 00:10:25.970 "superblock": true, 00:10:25.970 "num_base_bdevs": 4, 00:10:25.970 "num_base_bdevs_discovered": 4, 00:10:25.970 "num_base_bdevs_operational": 4, 00:10:25.970 "base_bdevs_list": [ 00:10:25.970 { 00:10:25.970 "name": "pt1", 00:10:25.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.970 "is_configured": true, 00:10:25.970 "data_offset": 2048, 00:10:25.970 "data_size": 63488 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "name": "pt2", 00:10:25.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.970 "is_configured": true, 00:10:25.970 "data_offset": 2048, 00:10:25.970 "data_size": 63488 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "name": "pt3", 00:10:25.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.970 "is_configured": true, 00:10:25.970 "data_offset": 2048, 00:10:25.970 "data_size": 63488 00:10:25.970 }, 00:10:25.970 { 00:10:25.970 "name": "pt4", 00:10:25.970 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:25.970 "is_configured": true, 00:10:25.970 "data_offset": 2048, 00:10:25.970 "data_size": 63488 00:10:25.970 } 00:10:25.970 ] 00:10:25.970 } 00:10:25.970 } 00:10:25.970 }' 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:25.970 pt2 00:10:25.970 pt3 00:10:25.970 pt4' 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.970 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.971 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.231 19:39:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.231 [2024-12-12 19:39:08.907340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=77f7390b-096d-47c5-b945-e849967624e6 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 77f7390b-096d-47c5-b945-e849967624e6 ']' 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.231 [2024-12-12 19:39:08.943040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.231 [2024-12-12 19:39:08.943085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.231 [2024-12-12 19:39:08.943204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.231 [2024-12-12 19:39:08.943288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.231 [2024-12-12 19:39:08.943305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.231 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.232 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:26.232 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:26.232 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:26.232 19:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:26.232 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.232 19:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:26.232 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:26.537 19:39:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.537 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.537 [2024-12-12 19:39:09.102774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:26.537 [2024-12-12 19:39:09.105015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:26.537 [2024-12-12 19:39:09.105071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:26.537 [2024-12-12 19:39:09.105107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:26.537 [2024-12-12 19:39:09.105166] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:26.537 [2024-12-12 19:39:09.105221] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:26.538 [2024-12-12 19:39:09.105244] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:26.538 [2024-12-12 19:39:09.105264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:26.538 [2024-12-12 19:39:09.105278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.538 [2024-12-12 19:39:09.105292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:26.538 request: 00:10:26.538 { 00:10:26.538 "name": "raid_bdev1", 00:10:26.538 "raid_level": "raid0", 00:10:26.538 "base_bdevs": [ 00:10:26.538 "malloc1", 00:10:26.538 "malloc2", 00:10:26.538 "malloc3", 00:10:26.538 "malloc4" 00:10:26.538 ], 00:10:26.538 "strip_size_kb": 64, 00:10:26.538 "superblock": false, 00:10:26.538 "method": "bdev_raid_create", 00:10:26.538 "req_id": 1 00:10:26.538 } 00:10:26.538 Got JSON-RPC error response 00:10:26.538 response: 00:10:26.538 { 00:10:26.538 "code": -17, 00:10:26.538 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:26.538 } 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.538 [2024-12-12 19:39:09.170690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:26.538 [2024-12-12 19:39:09.170747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.538 [2024-12-12 19:39:09.170767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:26.538 [2024-12-12 19:39:09.170778] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.538 [2024-12-12 19:39:09.173286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.538 [2024-12-12 19:39:09.173327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:26.538 [2024-12-12 19:39:09.173416] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:26.538 [2024-12-12 19:39:09.173480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:26.538 pt1 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.538 "name": "raid_bdev1", 00:10:26.538 "uuid": "77f7390b-096d-47c5-b945-e849967624e6", 00:10:26.538 "strip_size_kb": 64, 00:10:26.538 "state": "configuring", 00:10:26.538 "raid_level": "raid0", 00:10:26.538 "superblock": true, 00:10:26.538 "num_base_bdevs": 4, 00:10:26.538 "num_base_bdevs_discovered": 1, 00:10:26.538 "num_base_bdevs_operational": 4, 00:10:26.538 "base_bdevs_list": [ 00:10:26.538 { 00:10:26.538 "name": "pt1", 00:10:26.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.538 "is_configured": true, 00:10:26.538 "data_offset": 2048, 00:10:26.538 "data_size": 63488 00:10:26.538 }, 00:10:26.538 { 00:10:26.538 "name": null, 00:10:26.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.538 "is_configured": false, 00:10:26.538 "data_offset": 2048, 00:10:26.538 "data_size": 63488 00:10:26.538 }, 00:10:26.538 { 00:10:26.538 "name": null, 00:10:26.538 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.538 "is_configured": false, 00:10:26.538 "data_offset": 2048, 00:10:26.538 "data_size": 63488 00:10:26.538 }, 00:10:26.538 { 00:10:26.538 "name": null, 00:10:26.538 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:26.538 "is_configured": false, 00:10:26.538 "data_offset": 2048, 00:10:26.538 "data_size": 63488 00:10:26.538 } 00:10:26.538 ] 00:10:26.538 }' 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.538 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.798 [2024-12-12 19:39:09.598089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.798 [2024-12-12 19:39:09.598200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.798 [2024-12-12 19:39:09.598225] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:26.798 [2024-12-12 19:39:09.598237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.798 [2024-12-12 19:39:09.598860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.798 [2024-12-12 19:39:09.598897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.798 [2024-12-12 19:39:09.599014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.798 [2024-12-12 19:39:09.599052] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.798 pt2 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.798 [2024-12-12 19:39:09.606023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.798 19:39:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.798 19:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.059 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.059 "name": "raid_bdev1", 00:10:27.059 "uuid": "77f7390b-096d-47c5-b945-e849967624e6", 00:10:27.059 "strip_size_kb": 64, 00:10:27.059 "state": "configuring", 00:10:27.059 "raid_level": "raid0", 00:10:27.059 "superblock": true, 00:10:27.060 "num_base_bdevs": 4, 00:10:27.060 "num_base_bdevs_discovered": 1, 00:10:27.060 "num_base_bdevs_operational": 4, 00:10:27.060 "base_bdevs_list": [ 00:10:27.060 { 00:10:27.060 "name": "pt1", 00:10:27.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.060 "is_configured": true, 00:10:27.060 "data_offset": 2048, 00:10:27.060 "data_size": 63488 00:10:27.060 }, 00:10:27.060 { 00:10:27.060 "name": null, 00:10:27.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.060 "is_configured": false, 00:10:27.060 "data_offset": 0, 00:10:27.060 "data_size": 63488 00:10:27.060 }, 00:10:27.060 { 00:10:27.060 "name": null, 00:10:27.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.060 "is_configured": false, 00:10:27.060 "data_offset": 2048, 00:10:27.060 "data_size": 63488 00:10:27.060 }, 00:10:27.060 { 00:10:27.060 "name": null, 00:10:27.060 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.060 "is_configured": false, 00:10:27.060 "data_offset": 2048, 00:10:27.060 "data_size": 63488 00:10:27.060 } 00:10:27.060 ] 00:10:27.060 }' 00:10:27.060 19:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.060 19:39:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.339 [2024-12-12 19:39:10.045251] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:27.339 [2024-12-12 19:39:10.045332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.339 [2024-12-12 19:39:10.045357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:27.339 [2024-12-12 19:39:10.045366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.339 [2024-12-12 19:39:10.045961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.339 [2024-12-12 19:39:10.045988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:27.339 [2024-12-12 19:39:10.046084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:27.339 [2024-12-12 19:39:10.046127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:27.339 pt2 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.339 [2024-12-12 19:39:10.053207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:27.339 [2024-12-12 19:39:10.053259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.339 [2024-12-12 19:39:10.053277] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:27.339 [2024-12-12 19:39:10.053285] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.339 [2024-12-12 19:39:10.053723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.339 [2024-12-12 19:39:10.053748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:27.339 [2024-12-12 19:39:10.053817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:27.339 [2024-12-12 19:39:10.053844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:27.339 pt3 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.339 [2024-12-12 19:39:10.061182] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:27.339 [2024-12-12 19:39:10.061226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.339 [2024-12-12 19:39:10.061243] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:27.339 [2024-12-12 19:39:10.061251] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.339 [2024-12-12 19:39:10.061662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.339 [2024-12-12 19:39:10.061687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:27.339 [2024-12-12 19:39:10.061754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:27.339 [2024-12-12 19:39:10.061777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:27.339 [2024-12-12 19:39:10.061939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:27.339 [2024-12-12 19:39:10.061956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:27.339 [2024-12-12 19:39:10.062228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:27.339 [2024-12-12 19:39:10.062396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.339 [2024-12-12 19:39:10.062418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:27.339 [2024-12-12 19:39:10.062597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.339 pt4 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.339 "name": "raid_bdev1", 00:10:27.339 "uuid": "77f7390b-096d-47c5-b945-e849967624e6", 00:10:27.339 "strip_size_kb": 64, 00:10:27.339 "state": "online", 00:10:27.339 "raid_level": "raid0", 00:10:27.339 
"superblock": true, 00:10:27.339 "num_base_bdevs": 4, 00:10:27.339 "num_base_bdevs_discovered": 4, 00:10:27.339 "num_base_bdevs_operational": 4, 00:10:27.339 "base_bdevs_list": [ 00:10:27.339 { 00:10:27.339 "name": "pt1", 00:10:27.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.339 "is_configured": true, 00:10:27.339 "data_offset": 2048, 00:10:27.339 "data_size": 63488 00:10:27.339 }, 00:10:27.339 { 00:10:27.339 "name": "pt2", 00:10:27.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.339 "is_configured": true, 00:10:27.339 "data_offset": 2048, 00:10:27.339 "data_size": 63488 00:10:27.339 }, 00:10:27.339 { 00:10:27.339 "name": "pt3", 00:10:27.339 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.339 "is_configured": true, 00:10:27.339 "data_offset": 2048, 00:10:27.339 "data_size": 63488 00:10:27.339 }, 00:10:27.339 { 00:10:27.339 "name": "pt4", 00:10:27.339 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.339 "is_configured": true, 00:10:27.339 "data_offset": 2048, 00:10:27.339 "data_size": 63488 00:10:27.339 } 00:10:27.339 ] 00:10:27.339 }' 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.339 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.909 19:39:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.909 [2024-12-12 19:39:10.525071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.909 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.909 "name": "raid_bdev1", 00:10:27.909 "aliases": [ 00:10:27.909 "77f7390b-096d-47c5-b945-e849967624e6" 00:10:27.909 ], 00:10:27.909 "product_name": "Raid Volume", 00:10:27.909 "block_size": 512, 00:10:27.909 "num_blocks": 253952, 00:10:27.909 "uuid": "77f7390b-096d-47c5-b945-e849967624e6", 00:10:27.909 "assigned_rate_limits": { 00:10:27.909 "rw_ios_per_sec": 0, 00:10:27.909 "rw_mbytes_per_sec": 0, 00:10:27.909 "r_mbytes_per_sec": 0, 00:10:27.909 "w_mbytes_per_sec": 0 00:10:27.909 }, 00:10:27.909 "claimed": false, 00:10:27.909 "zoned": false, 00:10:27.909 "supported_io_types": { 00:10:27.909 "read": true, 00:10:27.909 "write": true, 00:10:27.909 "unmap": true, 00:10:27.909 "flush": true, 00:10:27.909 "reset": true, 00:10:27.909 "nvme_admin": false, 00:10:27.909 "nvme_io": false, 00:10:27.909 "nvme_io_md": false, 00:10:27.909 "write_zeroes": true, 00:10:27.909 "zcopy": false, 00:10:27.909 "get_zone_info": false, 00:10:27.909 "zone_management": false, 00:10:27.909 "zone_append": false, 00:10:27.909 "compare": false, 00:10:27.909 "compare_and_write": false, 00:10:27.909 "abort": false, 00:10:27.909 "seek_hole": false, 00:10:27.909 "seek_data": false, 00:10:27.909 "copy": false, 00:10:27.909 "nvme_iov_md": false 00:10:27.909 }, 00:10:27.909 
"memory_domains": [ 00:10:27.909 { 00:10:27.909 "dma_device_id": "system", 00:10:27.909 "dma_device_type": 1 00:10:27.909 }, 00:10:27.909 { 00:10:27.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.909 "dma_device_type": 2 00:10:27.909 }, 00:10:27.909 { 00:10:27.909 "dma_device_id": "system", 00:10:27.909 "dma_device_type": 1 00:10:27.909 }, 00:10:27.909 { 00:10:27.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.909 "dma_device_type": 2 00:10:27.909 }, 00:10:27.909 { 00:10:27.909 "dma_device_id": "system", 00:10:27.909 "dma_device_type": 1 00:10:27.909 }, 00:10:27.909 { 00:10:27.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.909 "dma_device_type": 2 00:10:27.909 }, 00:10:27.910 { 00:10:27.910 "dma_device_id": "system", 00:10:27.910 "dma_device_type": 1 00:10:27.910 }, 00:10:27.910 { 00:10:27.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.910 "dma_device_type": 2 00:10:27.910 } 00:10:27.910 ], 00:10:27.910 "driver_specific": { 00:10:27.910 "raid": { 00:10:27.910 "uuid": "77f7390b-096d-47c5-b945-e849967624e6", 00:10:27.910 "strip_size_kb": 64, 00:10:27.910 "state": "online", 00:10:27.910 "raid_level": "raid0", 00:10:27.910 "superblock": true, 00:10:27.910 "num_base_bdevs": 4, 00:10:27.910 "num_base_bdevs_discovered": 4, 00:10:27.910 "num_base_bdevs_operational": 4, 00:10:27.910 "base_bdevs_list": [ 00:10:27.910 { 00:10:27.910 "name": "pt1", 00:10:27.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.910 "is_configured": true, 00:10:27.910 "data_offset": 2048, 00:10:27.910 "data_size": 63488 00:10:27.910 }, 00:10:27.910 { 00:10:27.910 "name": "pt2", 00:10:27.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.910 "is_configured": true, 00:10:27.910 "data_offset": 2048, 00:10:27.910 "data_size": 63488 00:10:27.910 }, 00:10:27.910 { 00:10:27.910 "name": "pt3", 00:10:27.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.910 "is_configured": true, 00:10:27.910 "data_offset": 2048, 00:10:27.910 "data_size": 63488 
00:10:27.910 }, 00:10:27.910 { 00:10:27.910 "name": "pt4", 00:10:27.910 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.910 "is_configured": true, 00:10:27.910 "data_offset": 2048, 00:10:27.910 "data_size": 63488 00:10:27.910 } 00:10:27.910 ] 00:10:27.910 } 00:10:27.910 } 00:10:27.910 }' 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:27.910 pt2 00:10:27.910 pt3 00:10:27.910 pt4' 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.910 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.170 [2024-12-12 19:39:10.868308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 77f7390b-096d-47c5-b945-e849967624e6 '!=' 77f7390b-096d-47c5-b945-e849967624e6 ']' 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72411 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72411 ']' 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72411 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72411 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.170 killing process with pid 72411 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72411' 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72411 00:10:28.170 [2024-12-12 19:39:10.950042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.170 [2024-12-12 19:39:10.950154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.170 19:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72411 00:10:28.170 [2024-12-12 19:39:10.950269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.170 [2024-12-12 19:39:10.950280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:28.740 [2024-12-12 19:39:11.380643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.119 19:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:30.119 00:10:30.119 real 0m5.627s 00:10:30.119 user 0m7.847s 00:10:30.119 sys 0m1.088s 00:10:30.119 19:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.120 19:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.120 ************************************ 00:10:30.120 END TEST raid_superblock_test 
00:10:30.120 ************************************ 00:10:30.120 19:39:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:30.120 19:39:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:30.120 19:39:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.120 19:39:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.120 ************************************ 00:10:30.120 START TEST raid_read_error_test 00:10:30.120 ************************************ 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4rK9PFViIT 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72681 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72681 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72681 ']' 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.120 19:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.120 [2024-12-12 19:39:12.764254] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:30.120 [2024-12-12 19:39:12.764365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72681 ] 00:10:30.120 [2024-12-12 19:39:12.936555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.379 [2024-12-12 19:39:13.070679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.638 [2024-12-12 19:39:13.307962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.638 [2024-12-12 19:39:13.308017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.897 BaseBdev1_malloc 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.897 true 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.897 [2024-12-12 19:39:13.664354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:30.897 [2024-12-12 19:39:13.664423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.897 [2024-12-12 19:39:13.664446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:30.897 [2024-12-12 19:39:13.664458] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.897 [2024-12-12 19:39:13.666942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.897 [2024-12-12 19:39:13.666983] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.897 BaseBdev1 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.897 BaseBdev2_malloc 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.897 true 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.897 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.897 [2024-12-12 19:39:13.738386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:30.897 [2024-12-12 19:39:13.738448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.897 [2024-12-12 19:39:13.738465] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.897 [2024-12-12 19:39:13.738477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.157 [2024-12-12 19:39:13.740878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.157 [2024-12-12 19:39:13.740915] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:31.157 BaseBdev2 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.157 BaseBdev3_malloc 00:10:31.157 19:39:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.157 true 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.157 [2024-12-12 19:39:13.822730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:31.157 [2024-12-12 19:39:13.822800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.157 [2024-12-12 19:39:13.822820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:31.157 [2024-12-12 19:39:13.822832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.157 [2024-12-12 19:39:13.825249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.157 [2024-12-12 19:39:13.825289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:31.157 BaseBdev3 00:10:31.157 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.158 BaseBdev4_malloc 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.158 true 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.158 [2024-12-12 19:39:13.897257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:31.158 [2024-12-12 19:39:13.897339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.158 [2024-12-12 19:39:13.897363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:31.158 [2024-12-12 19:39:13.897375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.158 [2024-12-12 19:39:13.899965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.158 [2024-12-12 19:39:13.900010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:31.158 BaseBdev4 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.158 [2024-12-12 19:39:13.909295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.158 [2024-12-12 19:39:13.911435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.158 [2024-12-12 19:39:13.911519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.158 [2024-12-12 19:39:13.911593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:31.158 [2024-12-12 19:39:13.911825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:31.158 [2024-12-12 19:39:13.911849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:31.158 [2024-12-12 19:39:13.912145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:31.158 [2024-12-12 19:39:13.912341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:31.158 [2024-12-12 19:39:13.912360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:31.158 [2024-12-12 19:39:13.912561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:31.158 19:39:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.158 "name": "raid_bdev1", 00:10:31.158 "uuid": "c81c2b58-3042-4de6-9836-e81823f1a321", 00:10:31.158 "strip_size_kb": 64, 00:10:31.158 "state": "online", 00:10:31.158 "raid_level": "raid0", 00:10:31.158 "superblock": true, 00:10:31.158 "num_base_bdevs": 4, 00:10:31.158 "num_base_bdevs_discovered": 4, 00:10:31.158 "num_base_bdevs_operational": 4, 00:10:31.158 "base_bdevs_list": [ 00:10:31.158 
{ 00:10:31.158 "name": "BaseBdev1", 00:10:31.158 "uuid": "4ebc7d90-f125-5717-bb34-8fb90ac87e64", 00:10:31.158 "is_configured": true, 00:10:31.158 "data_offset": 2048, 00:10:31.158 "data_size": 63488 00:10:31.158 }, 00:10:31.158 { 00:10:31.158 "name": "BaseBdev2", 00:10:31.158 "uuid": "3a6095d5-71db-58c2-900b-7c1ffe880cb0", 00:10:31.158 "is_configured": true, 00:10:31.158 "data_offset": 2048, 00:10:31.158 "data_size": 63488 00:10:31.158 }, 00:10:31.158 { 00:10:31.158 "name": "BaseBdev3", 00:10:31.158 "uuid": "251e249f-0402-51b1-b227-aa5ea28c2f57", 00:10:31.158 "is_configured": true, 00:10:31.158 "data_offset": 2048, 00:10:31.158 "data_size": 63488 00:10:31.158 }, 00:10:31.158 { 00:10:31.158 "name": "BaseBdev4", 00:10:31.158 "uuid": "94ec47c4-7fd9-5f16-baf6-27d28d7a3ebb", 00:10:31.158 "is_configured": true, 00:10:31.158 "data_offset": 2048, 00:10:31.158 "data_size": 63488 00:10:31.158 } 00:10:31.158 ] 00:10:31.158 }' 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.158 19:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.727 19:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:31.727 19:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:31.727 [2024-12-12 19:39:14.449931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.665 19:39:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.665 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.665 19:39:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.665 "name": "raid_bdev1", 00:10:32.665 "uuid": "c81c2b58-3042-4de6-9836-e81823f1a321", 00:10:32.665 "strip_size_kb": 64, 00:10:32.665 "state": "online", 00:10:32.665 "raid_level": "raid0", 00:10:32.665 "superblock": true, 00:10:32.665 "num_base_bdevs": 4, 00:10:32.665 "num_base_bdevs_discovered": 4, 00:10:32.665 "num_base_bdevs_operational": 4, 00:10:32.665 "base_bdevs_list": [ 00:10:32.665 { 00:10:32.665 "name": "BaseBdev1", 00:10:32.665 "uuid": "4ebc7d90-f125-5717-bb34-8fb90ac87e64", 00:10:32.665 "is_configured": true, 00:10:32.665 "data_offset": 2048, 00:10:32.665 "data_size": 63488 00:10:32.665 }, 00:10:32.665 { 00:10:32.665 "name": "BaseBdev2", 00:10:32.666 "uuid": "3a6095d5-71db-58c2-900b-7c1ffe880cb0", 00:10:32.666 "is_configured": true, 00:10:32.666 "data_offset": 2048, 00:10:32.666 "data_size": 63488 00:10:32.666 }, 00:10:32.666 { 00:10:32.666 "name": "BaseBdev3", 00:10:32.666 "uuid": "251e249f-0402-51b1-b227-aa5ea28c2f57", 00:10:32.666 "is_configured": true, 00:10:32.666 "data_offset": 2048, 00:10:32.666 "data_size": 63488 00:10:32.666 }, 00:10:32.666 { 00:10:32.666 "name": "BaseBdev4", 00:10:32.666 "uuid": "94ec47c4-7fd9-5f16-baf6-27d28d7a3ebb", 00:10:32.666 "is_configured": true, 00:10:32.666 "data_offset": 2048, 00:10:32.666 "data_size": 63488 00:10:32.666 } 00:10:32.666 ] 00:10:32.666 }' 00:10:32.666 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.666 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.234 [2024-12-12 19:39:15.790783] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.234 [2024-12-12 19:39:15.790837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.234 [2024-12-12 19:39:15.793571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.234 [2024-12-12 19:39:15.793647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.234 [2024-12-12 19:39:15.793695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.234 [2024-12-12 19:39:15.793709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:33.234 { 00:10:33.234 "results": [ 00:10:33.234 { 00:10:33.234 "job": "raid_bdev1", 00:10:33.234 "core_mask": "0x1", 00:10:33.234 "workload": "randrw", 00:10:33.234 "percentage": 50, 00:10:33.234 "status": "finished", 00:10:33.234 "queue_depth": 1, 00:10:33.234 "io_size": 131072, 00:10:33.234 "runtime": 1.341388, 00:10:33.234 "iops": 13227.343617208444, 00:10:33.234 "mibps": 1653.4179521510555, 00:10:33.234 "io_failed": 1, 00:10:33.234 "io_timeout": 0, 00:10:33.234 "avg_latency_us": 106.29880020948099, 00:10:33.234 "min_latency_us": 27.053275109170304, 00:10:33.234 "max_latency_us": 1409.4532751091704 00:10:33.234 } 00:10:33.234 ], 00:10:33.234 "core_count": 1 00:10:33.234 } 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72681 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72681 ']' 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72681 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:33.234 19:39:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72681 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.234 killing process with pid 72681 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72681' 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72681 00:10:33.234 [2024-12-12 19:39:15.838120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.234 19:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72681 00:10:33.494 [2024-12-12 19:39:16.199873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4rK9PFViIT 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:34.874 00:10:34.874 real 0m4.874s 00:10:34.874 user 0m5.555s 00:10:34.874 sys 0m0.704s 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.874 19:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.874 ************************************ 00:10:34.874 END TEST raid_read_error_test 00:10:34.874 ************************************ 00:10:34.874 19:39:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:34.874 19:39:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.874 19:39:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.874 19:39:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.874 ************************************ 00:10:34.874 START TEST raid_write_error_test 00:10:34.874 ************************************ 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.874 19:39:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fi0PNqkTni 
00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72827 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72827 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72827 ']' 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.874 19:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.874 [2024-12-12 19:39:17.714510] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:34.874 [2024-12-12 19:39:17.714649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72827 ] 00:10:35.134 [2024-12-12 19:39:17.891193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.393 [2024-12-12 19:39:18.036790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.652 [2024-12-12 19:39:18.281237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.652 [2024-12-12 19:39:18.281300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.911 BaseBdev1_malloc 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.911 true 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.911 [2024-12-12 19:39:18.597801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:35.911 [2024-12-12 19:39:18.597873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.911 [2024-12-12 19:39:18.597894] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:35.911 [2024-12-12 19:39:18.597906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.911 [2024-12-12 19:39:18.600323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.911 [2024-12-12 19:39:18.600362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:35.911 BaseBdev1 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.911 BaseBdev2_malloc 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:35.911 19:39:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.911 true 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.911 [2024-12-12 19:39:18.672493] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:35.911 [2024-12-12 19:39:18.672569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.911 [2024-12-12 19:39:18.672587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:35.911 [2024-12-12 19:39:18.672598] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.911 [2024-12-12 19:39:18.675036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.911 [2024-12-12 19:39:18.675079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:35.911 BaseBdev2 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:35.911 BaseBdev3_malloc 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.911 true 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.911 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.172 [2024-12-12 19:39:18.756993] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:36.172 [2024-12-12 19:39:18.757077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.172 [2024-12-12 19:39:18.757103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:36.172 [2024-12-12 19:39:18.757115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.172 [2024-12-12 19:39:18.759804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.172 [2024-12-12 19:39:18.759849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:36.172 BaseBdev3 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.172 BaseBdev4_malloc 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.172 true 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.172 [2024-12-12 19:39:18.827412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:36.172 [2024-12-12 19:39:18.827467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.172 [2024-12-12 19:39:18.827485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:36.172 [2024-12-12 19:39:18.827497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.172 [2024-12-12 19:39:18.829939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.172 [2024-12-12 19:39:18.829979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:36.172 BaseBdev4 
00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.172 [2024-12-12 19:39:18.839462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.172 [2024-12-12 19:39:18.841531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.172 [2024-12-12 19:39:18.841622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.172 [2024-12-12 19:39:18.841691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.172 [2024-12-12 19:39:18.841912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:36.172 [2024-12-12 19:39:18.841936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:36.172 [2024-12-12 19:39:18.842199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:36.172 [2024-12-12 19:39:18.842390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:36.172 [2024-12-12 19:39:18.842409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:36.172 [2024-12-12 19:39:18.842599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.172 "name": "raid_bdev1", 00:10:36.172 "uuid": "04c8b231-55b4-437f-92b5-8c7a3d4b74f4", 00:10:36.172 "strip_size_kb": 64, 00:10:36.172 "state": "online", 00:10:36.172 "raid_level": "raid0", 00:10:36.172 "superblock": true, 00:10:36.172 "num_base_bdevs": 4, 00:10:36.172 "num_base_bdevs_discovered": 4, 00:10:36.172 
"num_base_bdevs_operational": 4, 00:10:36.172 "base_bdevs_list": [ 00:10:36.172 { 00:10:36.172 "name": "BaseBdev1", 00:10:36.172 "uuid": "d3e618ee-5d95-589e-aebc-3e2d9e672989", 00:10:36.172 "is_configured": true, 00:10:36.172 "data_offset": 2048, 00:10:36.172 "data_size": 63488 00:10:36.172 }, 00:10:36.172 { 00:10:36.172 "name": "BaseBdev2", 00:10:36.172 "uuid": "a53a3763-ccde-58b6-8150-218ee8804787", 00:10:36.172 "is_configured": true, 00:10:36.172 "data_offset": 2048, 00:10:36.172 "data_size": 63488 00:10:36.172 }, 00:10:36.172 { 00:10:36.172 "name": "BaseBdev3", 00:10:36.172 "uuid": "3a9f04a8-f8d3-58af-965d-1eac8296bddc", 00:10:36.172 "is_configured": true, 00:10:36.172 "data_offset": 2048, 00:10:36.172 "data_size": 63488 00:10:36.172 }, 00:10:36.172 { 00:10:36.172 "name": "BaseBdev4", 00:10:36.172 "uuid": "e8eb56f7-eff6-5087-8943-3743002c2633", 00:10:36.172 "is_configured": true, 00:10:36.172 "data_offset": 2048, 00:10:36.172 "data_size": 63488 00:10:36.172 } 00:10:36.172 ] 00:10:36.172 }' 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.172 19:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.741 19:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:36.741 19:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:36.741 [2024-12-12 19:39:19.396070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:37.678 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:37.678 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.678 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.678 19:39:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.678 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:37.678 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:37.678 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:37.678 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:37.678 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.679 "name": "raid_bdev1", 00:10:37.679 "uuid": "04c8b231-55b4-437f-92b5-8c7a3d4b74f4", 00:10:37.679 "strip_size_kb": 64, 00:10:37.679 "state": "online", 00:10:37.679 "raid_level": "raid0", 00:10:37.679 "superblock": true, 00:10:37.679 "num_base_bdevs": 4, 00:10:37.679 "num_base_bdevs_discovered": 4, 00:10:37.679 "num_base_bdevs_operational": 4, 00:10:37.679 "base_bdevs_list": [ 00:10:37.679 { 00:10:37.679 "name": "BaseBdev1", 00:10:37.679 "uuid": "d3e618ee-5d95-589e-aebc-3e2d9e672989", 00:10:37.679 "is_configured": true, 00:10:37.679 "data_offset": 2048, 00:10:37.679 "data_size": 63488 00:10:37.679 }, 00:10:37.679 { 00:10:37.679 "name": "BaseBdev2", 00:10:37.679 "uuid": "a53a3763-ccde-58b6-8150-218ee8804787", 00:10:37.679 "is_configured": true, 00:10:37.679 "data_offset": 2048, 00:10:37.679 "data_size": 63488 00:10:37.679 }, 00:10:37.679 { 00:10:37.679 "name": "BaseBdev3", 00:10:37.679 "uuid": "3a9f04a8-f8d3-58af-965d-1eac8296bddc", 00:10:37.679 "is_configured": true, 00:10:37.679 "data_offset": 2048, 00:10:37.679 "data_size": 63488 00:10:37.679 }, 00:10:37.679 { 00:10:37.679 "name": "BaseBdev4", 00:10:37.679 "uuid": "e8eb56f7-eff6-5087-8943-3743002c2633", 00:10:37.679 "is_configured": true, 00:10:37.679 "data_offset": 2048, 00:10:37.679 "data_size": 63488 00:10:37.679 } 00:10:37.679 ] 00:10:37.679 }' 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.679 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:38.029 [2024-12-12 19:39:20.785152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.029 [2024-12-12 19:39:20.785224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.029 [2024-12-12 19:39:20.787994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.029 [2024-12-12 19:39:20.788067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.029 [2024-12-12 19:39:20.788115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.029 [2024-12-12 19:39:20.788127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:38.029 { 00:10:38.029 "results": [ 00:10:38.029 { 00:10:38.029 "job": "raid_bdev1", 00:10:38.029 "core_mask": "0x1", 00:10:38.029 "workload": "randrw", 00:10:38.029 "percentage": 50, 00:10:38.029 "status": "finished", 00:10:38.029 "queue_depth": 1, 00:10:38.029 "io_size": 131072, 00:10:38.029 "runtime": 1.389681, 00:10:38.029 "iops": 13385.80580723202, 00:10:38.029 "mibps": 1673.2257259040025, 00:10:38.029 "io_failed": 1, 00:10:38.029 "io_timeout": 0, 00:10:38.029 "avg_latency_us": 105.01994424057536, 00:10:38.029 "min_latency_us": 27.165065502183406, 00:10:38.029 "max_latency_us": 1402.2986899563318 00:10:38.029 } 00:10:38.029 ], 00:10:38.029 "core_count": 1 00:10:38.029 } 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72827 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72827 ']' 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72827 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72827 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72827' 00:10:38.029 killing process with pid 72827 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72827 00:10:38.029 [2024-12-12 19:39:20.831854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.029 19:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72827 00:10:38.609 [2024-12-12 19:39:21.186751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.987 19:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fi0PNqkTni 00:10:39.987 19:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:39.987 19:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:39.987 19:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:39.987 19:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:39.987 19:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.987 19:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:39.987 19:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:39.987 00:10:39.987 real 0m4.888s 00:10:39.987 user 0m5.635s 00:10:39.987 sys 0m0.679s 00:10:39.987 
19:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.987 ************************************ 00:10:39.987 END TEST raid_write_error_test 00:10:39.987 ************************************ 00:10:39.987 19:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 19:39:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:39.987 19:39:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:39.987 19:39:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:39.987 19:39:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.987 19:39:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 ************************************ 00:10:39.987 START TEST raid_state_function_test 00:10:39.987 ************************************ 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.987 19:39:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:39.987 19:39:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:39.987 Process raid pid: 72976 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72976 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72976' 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72976 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72976 ']' 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.987 19:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 [2024-12-12 19:39:22.663790] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:39.987 [2024-12-12 19:39:22.664000] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.247 [2024-12-12 19:39:22.837232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.247 [2024-12-12 19:39:22.969093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.506 [2024-12-12 19:39:23.198089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.506 [2024-12-12 19:39:23.198192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.765 [2024-12-12 19:39:23.486771] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.765 [2024-12-12 19:39:23.486911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.765 [2024-12-12 19:39:23.486941] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.765 [2024-12-12 19:39:23.486966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.765 [2024-12-12 19:39:23.486984] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:40.765 [2024-12-12 19:39:23.487007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.765 [2024-12-12 19:39:23.487024] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.765 [2024-12-12 19:39:23.487070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.765 "name": "Existed_Raid", 00:10:40.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.765 "strip_size_kb": 64, 00:10:40.765 "state": "configuring", 00:10:40.765 "raid_level": "concat", 00:10:40.765 "superblock": false, 00:10:40.765 "num_base_bdevs": 4, 00:10:40.765 "num_base_bdevs_discovered": 0, 00:10:40.765 "num_base_bdevs_operational": 4, 00:10:40.765 "base_bdevs_list": [ 00:10:40.765 { 00:10:40.765 "name": "BaseBdev1", 00:10:40.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.765 "is_configured": false, 00:10:40.765 "data_offset": 0, 00:10:40.765 "data_size": 0 00:10:40.765 }, 00:10:40.765 { 00:10:40.765 "name": "BaseBdev2", 00:10:40.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.765 "is_configured": false, 00:10:40.765 "data_offset": 0, 00:10:40.765 "data_size": 0 00:10:40.765 }, 00:10:40.765 { 00:10:40.765 "name": "BaseBdev3", 00:10:40.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.765 "is_configured": false, 00:10:40.765 "data_offset": 0, 00:10:40.765 "data_size": 0 00:10:40.765 }, 00:10:40.765 { 00:10:40.765 "name": "BaseBdev4", 00:10:40.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.765 "is_configured": false, 00:10:40.765 "data_offset": 0, 00:10:40.765 "data_size": 0 00:10:40.765 } 00:10:40.765 ] 00:10:40.765 }' 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.765 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 [2024-12-12 19:39:23.945992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.333 [2024-12-12 19:39:23.946140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 [2024-12-12 19:39:23.957913] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.333 [2024-12-12 19:39:23.957996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.333 [2024-12-12 19:39:23.958021] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.333 [2024-12-12 19:39:23.958043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.333 [2024-12-12 19:39:23.958059] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.333 [2024-12-12 19:39:23.958079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.333 [2024-12-12 19:39:23.958095] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.333 [2024-12-12 19:39:23.958115] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.333 19:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 [2024-12-12 19:39:24.012294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.333 BaseBdev1 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.333 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 [ 00:10:41.333 { 00:10:41.333 "name": "BaseBdev1", 00:10:41.333 "aliases": [ 00:10:41.333 "54a619ba-2e0b-4613-96ef-b237a5b69b11" 00:10:41.333 ], 00:10:41.333 "product_name": "Malloc disk", 00:10:41.333 "block_size": 512, 00:10:41.333 "num_blocks": 65536, 00:10:41.333 "uuid": "54a619ba-2e0b-4613-96ef-b237a5b69b11", 00:10:41.333 "assigned_rate_limits": { 00:10:41.333 "rw_ios_per_sec": 0, 00:10:41.333 "rw_mbytes_per_sec": 0, 00:10:41.333 "r_mbytes_per_sec": 0, 00:10:41.333 "w_mbytes_per_sec": 0 00:10:41.333 }, 00:10:41.333 "claimed": true, 00:10:41.333 "claim_type": "exclusive_write", 00:10:41.333 "zoned": false, 00:10:41.333 "supported_io_types": { 00:10:41.333 "read": true, 00:10:41.333 "write": true, 00:10:41.333 "unmap": true, 00:10:41.333 "flush": true, 00:10:41.333 "reset": true, 00:10:41.333 "nvme_admin": false, 00:10:41.333 "nvme_io": false, 00:10:41.333 "nvme_io_md": false, 00:10:41.333 "write_zeroes": true, 00:10:41.333 "zcopy": true, 00:10:41.333 "get_zone_info": false, 00:10:41.334 "zone_management": false, 00:10:41.334 "zone_append": false, 00:10:41.334 "compare": false, 00:10:41.334 "compare_and_write": false, 00:10:41.334 "abort": true, 00:10:41.334 "seek_hole": false, 00:10:41.334 "seek_data": false, 00:10:41.334 "copy": true, 00:10:41.334 "nvme_iov_md": false 00:10:41.334 }, 00:10:41.334 "memory_domains": [ 00:10:41.334 { 00:10:41.334 "dma_device_id": "system", 00:10:41.334 "dma_device_type": 1 00:10:41.334 }, 00:10:41.334 { 00:10:41.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.334 "dma_device_type": 2 00:10:41.334 } 00:10:41.334 ], 00:10:41.334 "driver_specific": {} 00:10:41.334 } 00:10:41.334 ] 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.334 "name": "Existed_Raid", 
00:10:41.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.334 "strip_size_kb": 64, 00:10:41.334 "state": "configuring", 00:10:41.334 "raid_level": "concat", 00:10:41.334 "superblock": false, 00:10:41.334 "num_base_bdevs": 4, 00:10:41.334 "num_base_bdevs_discovered": 1, 00:10:41.334 "num_base_bdevs_operational": 4, 00:10:41.334 "base_bdevs_list": [ 00:10:41.334 { 00:10:41.334 "name": "BaseBdev1", 00:10:41.334 "uuid": "54a619ba-2e0b-4613-96ef-b237a5b69b11", 00:10:41.334 "is_configured": true, 00:10:41.334 "data_offset": 0, 00:10:41.334 "data_size": 65536 00:10:41.334 }, 00:10:41.334 { 00:10:41.334 "name": "BaseBdev2", 00:10:41.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.334 "is_configured": false, 00:10:41.334 "data_offset": 0, 00:10:41.334 "data_size": 0 00:10:41.334 }, 00:10:41.334 { 00:10:41.334 "name": "BaseBdev3", 00:10:41.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.334 "is_configured": false, 00:10:41.334 "data_offset": 0, 00:10:41.334 "data_size": 0 00:10:41.334 }, 00:10:41.334 { 00:10:41.334 "name": "BaseBdev4", 00:10:41.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.334 "is_configured": false, 00:10:41.334 "data_offset": 0, 00:10:41.334 "data_size": 0 00:10:41.334 } 00:10:41.334 ] 00:10:41.334 }' 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.334 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.902 [2024-12-12 19:39:24.503552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.902 [2024-12-12 19:39:24.503627] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.902 [2024-12-12 19:39:24.515564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.902 [2024-12-12 19:39:24.517709] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.902 [2024-12-12 19:39:24.517751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.902 [2024-12-12 19:39:24.517761] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.902 [2024-12-12 19:39:24.517772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.902 [2024-12-12 19:39:24.517778] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.902 [2024-12-12 19:39:24.517787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.902 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.902 "name": "Existed_Raid", 00:10:41.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.902 "strip_size_kb": 64, 00:10:41.902 "state": "configuring", 00:10:41.902 "raid_level": "concat", 00:10:41.902 "superblock": false, 00:10:41.902 "num_base_bdevs": 4, 00:10:41.902 
"num_base_bdevs_discovered": 1, 00:10:41.902 "num_base_bdevs_operational": 4, 00:10:41.902 "base_bdevs_list": [ 00:10:41.902 { 00:10:41.902 "name": "BaseBdev1", 00:10:41.902 "uuid": "54a619ba-2e0b-4613-96ef-b237a5b69b11", 00:10:41.902 "is_configured": true, 00:10:41.903 "data_offset": 0, 00:10:41.903 "data_size": 65536 00:10:41.903 }, 00:10:41.903 { 00:10:41.903 "name": "BaseBdev2", 00:10:41.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.903 "is_configured": false, 00:10:41.903 "data_offset": 0, 00:10:41.903 "data_size": 0 00:10:41.903 }, 00:10:41.903 { 00:10:41.903 "name": "BaseBdev3", 00:10:41.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.903 "is_configured": false, 00:10:41.903 "data_offset": 0, 00:10:41.903 "data_size": 0 00:10:41.903 }, 00:10:41.903 { 00:10:41.903 "name": "BaseBdev4", 00:10:41.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.903 "is_configured": false, 00:10:41.903 "data_offset": 0, 00:10:41.903 "data_size": 0 00:10:41.903 } 00:10:41.903 ] 00:10:41.903 }' 00:10:41.903 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.903 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.161 19:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.161 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.161 19:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.421 [2024-12-12 19:39:25.024181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.421 BaseBdev2 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:42.421 19:39:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.421 [ 00:10:42.421 { 00:10:42.421 "name": "BaseBdev2", 00:10:42.421 "aliases": [ 00:10:42.421 "aa24a495-02dd-4b1f-adf9-4adaad803aac" 00:10:42.421 ], 00:10:42.421 "product_name": "Malloc disk", 00:10:42.421 "block_size": 512, 00:10:42.421 "num_blocks": 65536, 00:10:42.421 "uuid": "aa24a495-02dd-4b1f-adf9-4adaad803aac", 00:10:42.421 "assigned_rate_limits": { 00:10:42.421 "rw_ios_per_sec": 0, 00:10:42.421 "rw_mbytes_per_sec": 0, 00:10:42.421 "r_mbytes_per_sec": 0, 00:10:42.421 "w_mbytes_per_sec": 0 00:10:42.421 }, 00:10:42.421 "claimed": true, 00:10:42.421 "claim_type": "exclusive_write", 00:10:42.421 "zoned": false, 00:10:42.421 "supported_io_types": { 
00:10:42.421 "read": true, 00:10:42.421 "write": true, 00:10:42.421 "unmap": true, 00:10:42.421 "flush": true, 00:10:42.421 "reset": true, 00:10:42.421 "nvme_admin": false, 00:10:42.421 "nvme_io": false, 00:10:42.421 "nvme_io_md": false, 00:10:42.421 "write_zeroes": true, 00:10:42.421 "zcopy": true, 00:10:42.421 "get_zone_info": false, 00:10:42.421 "zone_management": false, 00:10:42.421 "zone_append": false, 00:10:42.421 "compare": false, 00:10:42.421 "compare_and_write": false, 00:10:42.421 "abort": true, 00:10:42.421 "seek_hole": false, 00:10:42.421 "seek_data": false, 00:10:42.421 "copy": true, 00:10:42.421 "nvme_iov_md": false 00:10:42.421 }, 00:10:42.421 "memory_domains": [ 00:10:42.421 { 00:10:42.421 "dma_device_id": "system", 00:10:42.421 "dma_device_type": 1 00:10:42.421 }, 00:10:42.421 { 00:10:42.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.421 "dma_device_type": 2 00:10:42.421 } 00:10:42.421 ], 00:10:42.421 "driver_specific": {} 00:10:42.421 } 00:10:42.421 ] 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.421 "name": "Existed_Raid", 00:10:42.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.421 "strip_size_kb": 64, 00:10:42.421 "state": "configuring", 00:10:42.421 "raid_level": "concat", 00:10:42.421 "superblock": false, 00:10:42.421 "num_base_bdevs": 4, 00:10:42.421 "num_base_bdevs_discovered": 2, 00:10:42.421 "num_base_bdevs_operational": 4, 00:10:42.421 "base_bdevs_list": [ 00:10:42.421 { 00:10:42.421 "name": "BaseBdev1", 00:10:42.421 "uuid": "54a619ba-2e0b-4613-96ef-b237a5b69b11", 00:10:42.421 "is_configured": true, 00:10:42.421 "data_offset": 0, 00:10:42.421 "data_size": 65536 00:10:42.421 }, 00:10:42.421 { 00:10:42.421 "name": "BaseBdev2", 00:10:42.421 "uuid": "aa24a495-02dd-4b1f-adf9-4adaad803aac", 00:10:42.421 
"is_configured": true, 00:10:42.421 "data_offset": 0, 00:10:42.421 "data_size": 65536 00:10:42.421 }, 00:10:42.421 { 00:10:42.421 "name": "BaseBdev3", 00:10:42.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.421 "is_configured": false, 00:10:42.421 "data_offset": 0, 00:10:42.421 "data_size": 0 00:10:42.421 }, 00:10:42.421 { 00:10:42.421 "name": "BaseBdev4", 00:10:42.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.421 "is_configured": false, 00:10:42.421 "data_offset": 0, 00:10:42.421 "data_size": 0 00:10:42.421 } 00:10:42.421 ] 00:10:42.421 }' 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.421 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.682 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.682 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.682 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.941 [2024-12-12 19:39:25.526774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.941 BaseBdev3 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.941 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.941 [ 00:10:42.941 { 00:10:42.941 "name": "BaseBdev3", 00:10:42.941 "aliases": [ 00:10:42.941 "0dbf284c-7c07-4924-8c98-ea0175eabbbd" 00:10:42.941 ], 00:10:42.941 "product_name": "Malloc disk", 00:10:42.941 "block_size": 512, 00:10:42.941 "num_blocks": 65536, 00:10:42.941 "uuid": "0dbf284c-7c07-4924-8c98-ea0175eabbbd", 00:10:42.941 "assigned_rate_limits": { 00:10:42.941 "rw_ios_per_sec": 0, 00:10:42.941 "rw_mbytes_per_sec": 0, 00:10:42.941 "r_mbytes_per_sec": 0, 00:10:42.941 "w_mbytes_per_sec": 0 00:10:42.941 }, 00:10:42.941 "claimed": true, 00:10:42.941 "claim_type": "exclusive_write", 00:10:42.941 "zoned": false, 00:10:42.941 "supported_io_types": { 00:10:42.941 "read": true, 00:10:42.941 "write": true, 00:10:42.941 "unmap": true, 00:10:42.941 "flush": true, 00:10:42.941 "reset": true, 00:10:42.941 "nvme_admin": false, 00:10:42.941 "nvme_io": false, 00:10:42.941 "nvme_io_md": false, 00:10:42.941 "write_zeroes": true, 00:10:42.941 "zcopy": true, 00:10:42.941 "get_zone_info": false, 00:10:42.941 "zone_management": false, 00:10:42.941 "zone_append": false, 00:10:42.942 "compare": false, 00:10:42.942 "compare_and_write": false, 
00:10:42.942 "abort": true, 00:10:42.942 "seek_hole": false, 00:10:42.942 "seek_data": false, 00:10:42.942 "copy": true, 00:10:42.942 "nvme_iov_md": false 00:10:42.942 }, 00:10:42.942 "memory_domains": [ 00:10:42.942 { 00:10:42.942 "dma_device_id": "system", 00:10:42.942 "dma_device_type": 1 00:10:42.942 }, 00:10:42.942 { 00:10:42.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.942 "dma_device_type": 2 00:10:42.942 } 00:10:42.942 ], 00:10:42.942 "driver_specific": {} 00:10:42.942 } 00:10:42.942 ] 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.942 "name": "Existed_Raid", 00:10:42.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.942 "strip_size_kb": 64, 00:10:42.942 "state": "configuring", 00:10:42.942 "raid_level": "concat", 00:10:42.942 "superblock": false, 00:10:42.942 "num_base_bdevs": 4, 00:10:42.942 "num_base_bdevs_discovered": 3, 00:10:42.942 "num_base_bdevs_operational": 4, 00:10:42.942 "base_bdevs_list": [ 00:10:42.942 { 00:10:42.942 "name": "BaseBdev1", 00:10:42.942 "uuid": "54a619ba-2e0b-4613-96ef-b237a5b69b11", 00:10:42.942 "is_configured": true, 00:10:42.942 "data_offset": 0, 00:10:42.942 "data_size": 65536 00:10:42.942 }, 00:10:42.942 { 00:10:42.942 "name": "BaseBdev2", 00:10:42.942 "uuid": "aa24a495-02dd-4b1f-adf9-4adaad803aac", 00:10:42.942 "is_configured": true, 00:10:42.942 "data_offset": 0, 00:10:42.942 "data_size": 65536 00:10:42.942 }, 00:10:42.942 { 00:10:42.942 "name": "BaseBdev3", 00:10:42.942 "uuid": "0dbf284c-7c07-4924-8c98-ea0175eabbbd", 00:10:42.942 "is_configured": true, 00:10:42.942 "data_offset": 0, 00:10:42.942 "data_size": 65536 00:10:42.942 }, 00:10:42.942 { 00:10:42.942 "name": "BaseBdev4", 00:10:42.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.942 "is_configured": false, 
00:10:42.942 "data_offset": 0, 00:10:42.942 "data_size": 0 00:10:42.942 } 00:10:42.942 ] 00:10:42.942 }' 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.942 19:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.201 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:43.201 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.201 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.460 [2024-12-12 19:39:26.062947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:43.460 [2024-12-12 19:39:26.063006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:43.460 [2024-12-12 19:39:26.063015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:43.460 [2024-12-12 19:39:26.063373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:43.460 [2024-12-12 19:39:26.063595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:43.460 [2024-12-12 19:39:26.063617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:43.460 [2024-12-12 19:39:26.063952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.460 BaseBdev4 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.460 [ 00:10:43.460 { 00:10:43.460 "name": "BaseBdev4", 00:10:43.460 "aliases": [ 00:10:43.460 "c2cc0c8d-84ce-4a66-b654-5872d2fcdc92" 00:10:43.460 ], 00:10:43.460 "product_name": "Malloc disk", 00:10:43.460 "block_size": 512, 00:10:43.460 "num_blocks": 65536, 00:10:43.460 "uuid": "c2cc0c8d-84ce-4a66-b654-5872d2fcdc92", 00:10:43.460 "assigned_rate_limits": { 00:10:43.460 "rw_ios_per_sec": 0, 00:10:43.460 "rw_mbytes_per_sec": 0, 00:10:43.460 "r_mbytes_per_sec": 0, 00:10:43.460 "w_mbytes_per_sec": 0 00:10:43.460 }, 00:10:43.460 "claimed": true, 00:10:43.460 "claim_type": "exclusive_write", 00:10:43.460 "zoned": false, 00:10:43.460 "supported_io_types": { 00:10:43.460 "read": true, 00:10:43.460 "write": true, 00:10:43.460 "unmap": true, 00:10:43.460 "flush": true, 00:10:43.460 "reset": true, 00:10:43.460 
"nvme_admin": false, 00:10:43.460 "nvme_io": false, 00:10:43.460 "nvme_io_md": false, 00:10:43.460 "write_zeroes": true, 00:10:43.460 "zcopy": true, 00:10:43.460 "get_zone_info": false, 00:10:43.460 "zone_management": false, 00:10:43.460 "zone_append": false, 00:10:43.460 "compare": false, 00:10:43.460 "compare_and_write": false, 00:10:43.460 "abort": true, 00:10:43.460 "seek_hole": false, 00:10:43.460 "seek_data": false, 00:10:43.460 "copy": true, 00:10:43.460 "nvme_iov_md": false 00:10:43.460 }, 00:10:43.460 "memory_domains": [ 00:10:43.460 { 00:10:43.460 "dma_device_id": "system", 00:10:43.460 "dma_device_type": 1 00:10:43.460 }, 00:10:43.460 { 00:10:43.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.460 "dma_device_type": 2 00:10:43.460 } 00:10:43.460 ], 00:10:43.460 "driver_specific": {} 00:10:43.460 } 00:10:43.460 ] 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.460 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.461 
19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.461 "name": "Existed_Raid", 00:10:43.461 "uuid": "b9d71712-00cf-47f7-b47d-2b8a368bd81b", 00:10:43.461 "strip_size_kb": 64, 00:10:43.461 "state": "online", 00:10:43.461 "raid_level": "concat", 00:10:43.461 "superblock": false, 00:10:43.461 "num_base_bdevs": 4, 00:10:43.461 "num_base_bdevs_discovered": 4, 00:10:43.461 "num_base_bdevs_operational": 4, 00:10:43.461 "base_bdevs_list": [ 00:10:43.461 { 00:10:43.461 "name": "BaseBdev1", 00:10:43.461 "uuid": "54a619ba-2e0b-4613-96ef-b237a5b69b11", 00:10:43.461 "is_configured": true, 00:10:43.461 "data_offset": 0, 00:10:43.461 "data_size": 65536 00:10:43.461 }, 00:10:43.461 { 00:10:43.461 "name": "BaseBdev2", 00:10:43.461 "uuid": "aa24a495-02dd-4b1f-adf9-4adaad803aac", 00:10:43.461 "is_configured": true, 00:10:43.461 "data_offset": 0, 00:10:43.461 "data_size": 65536 00:10:43.461 }, 00:10:43.461 { 00:10:43.461 "name": "BaseBdev3", 
00:10:43.461 "uuid": "0dbf284c-7c07-4924-8c98-ea0175eabbbd", 00:10:43.461 "is_configured": true, 00:10:43.461 "data_offset": 0, 00:10:43.461 "data_size": 65536 00:10:43.461 }, 00:10:43.461 { 00:10:43.461 "name": "BaseBdev4", 00:10:43.461 "uuid": "c2cc0c8d-84ce-4a66-b654-5872d2fcdc92", 00:10:43.461 "is_configured": true, 00:10:43.461 "data_offset": 0, 00:10:43.461 "data_size": 65536 00:10:43.461 } 00:10:43.461 ] 00:10:43.461 }' 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.461 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.720 [2024-12-12 19:39:26.526590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.720 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.720 
19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.720 "name": "Existed_Raid", 00:10:43.720 "aliases": [ 00:10:43.720 "b9d71712-00cf-47f7-b47d-2b8a368bd81b" 00:10:43.720 ], 00:10:43.720 "product_name": "Raid Volume", 00:10:43.720 "block_size": 512, 00:10:43.720 "num_blocks": 262144, 00:10:43.720 "uuid": "b9d71712-00cf-47f7-b47d-2b8a368bd81b", 00:10:43.720 "assigned_rate_limits": { 00:10:43.720 "rw_ios_per_sec": 0, 00:10:43.720 "rw_mbytes_per_sec": 0, 00:10:43.720 "r_mbytes_per_sec": 0, 00:10:43.720 "w_mbytes_per_sec": 0 00:10:43.720 }, 00:10:43.720 "claimed": false, 00:10:43.720 "zoned": false, 00:10:43.720 "supported_io_types": { 00:10:43.720 "read": true, 00:10:43.720 "write": true, 00:10:43.720 "unmap": true, 00:10:43.720 "flush": true, 00:10:43.720 "reset": true, 00:10:43.720 "nvme_admin": false, 00:10:43.720 "nvme_io": false, 00:10:43.720 "nvme_io_md": false, 00:10:43.720 "write_zeroes": true, 00:10:43.720 "zcopy": false, 00:10:43.720 "get_zone_info": false, 00:10:43.720 "zone_management": false, 00:10:43.720 "zone_append": false, 00:10:43.720 "compare": false, 00:10:43.720 "compare_and_write": false, 00:10:43.720 "abort": false, 00:10:43.720 "seek_hole": false, 00:10:43.720 "seek_data": false, 00:10:43.720 "copy": false, 00:10:43.720 "nvme_iov_md": false 00:10:43.720 }, 00:10:43.720 "memory_domains": [ 00:10:43.720 { 00:10:43.720 "dma_device_id": "system", 00:10:43.720 "dma_device_type": 1 00:10:43.720 }, 00:10:43.720 { 00:10:43.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.720 "dma_device_type": 2 00:10:43.720 }, 00:10:43.720 { 00:10:43.720 "dma_device_id": "system", 00:10:43.720 "dma_device_type": 1 00:10:43.720 }, 00:10:43.720 { 00:10:43.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.720 "dma_device_type": 2 00:10:43.720 }, 00:10:43.720 { 00:10:43.720 "dma_device_id": "system", 00:10:43.720 "dma_device_type": 1 00:10:43.720 }, 00:10:43.720 { 00:10:43.720 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:43.720 "dma_device_type": 2 00:10:43.720 }, 00:10:43.720 { 00:10:43.720 "dma_device_id": "system", 00:10:43.720 "dma_device_type": 1 00:10:43.720 }, 00:10:43.720 { 00:10:43.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.720 "dma_device_type": 2 00:10:43.720 } 00:10:43.720 ], 00:10:43.720 "driver_specific": { 00:10:43.720 "raid": { 00:10:43.720 "uuid": "b9d71712-00cf-47f7-b47d-2b8a368bd81b", 00:10:43.720 "strip_size_kb": 64, 00:10:43.720 "state": "online", 00:10:43.720 "raid_level": "concat", 00:10:43.720 "superblock": false, 00:10:43.720 "num_base_bdevs": 4, 00:10:43.720 "num_base_bdevs_discovered": 4, 00:10:43.720 "num_base_bdevs_operational": 4, 00:10:43.720 "base_bdevs_list": [ 00:10:43.720 { 00:10:43.720 "name": "BaseBdev1", 00:10:43.720 "uuid": "54a619ba-2e0b-4613-96ef-b237a5b69b11", 00:10:43.720 "is_configured": true, 00:10:43.720 "data_offset": 0, 00:10:43.720 "data_size": 65536 00:10:43.721 }, 00:10:43.721 { 00:10:43.721 "name": "BaseBdev2", 00:10:43.721 "uuid": "aa24a495-02dd-4b1f-adf9-4adaad803aac", 00:10:43.721 "is_configured": true, 00:10:43.721 "data_offset": 0, 00:10:43.721 "data_size": 65536 00:10:43.721 }, 00:10:43.721 { 00:10:43.721 "name": "BaseBdev3", 00:10:43.721 "uuid": "0dbf284c-7c07-4924-8c98-ea0175eabbbd", 00:10:43.721 "is_configured": true, 00:10:43.721 "data_offset": 0, 00:10:43.721 "data_size": 65536 00:10:43.721 }, 00:10:43.721 { 00:10:43.721 "name": "BaseBdev4", 00:10:43.721 "uuid": "c2cc0c8d-84ce-4a66-b654-5872d2fcdc92", 00:10:43.721 "is_configured": true, 00:10:43.721 "data_offset": 0, 00:10:43.721 "data_size": 65536 00:10:43.721 } 00:10:43.721 ] 00:10:43.721 } 00:10:43.721 } 00:10:43.721 }' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:43.980 BaseBdev2 
00:10:43.980 BaseBdev3 00:10:43.980 BaseBdev4' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.980 19:39:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.980 19:39:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.980 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.980 [2024-12-12 19:39:26.785970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.980 [2024-12-12 19:39:26.786008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.980 [2024-12-12 19:39:26.786068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.239 "name": "Existed_Raid", 00:10:44.239 "uuid": "b9d71712-00cf-47f7-b47d-2b8a368bd81b", 00:10:44.239 "strip_size_kb": 64, 00:10:44.239 "state": "offline", 00:10:44.239 "raid_level": "concat", 00:10:44.239 "superblock": false, 00:10:44.239 "num_base_bdevs": 4, 00:10:44.239 "num_base_bdevs_discovered": 3, 00:10:44.239 "num_base_bdevs_operational": 3, 00:10:44.239 "base_bdevs_list": [ 00:10:44.239 { 00:10:44.239 "name": null, 00:10:44.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.239 "is_configured": false, 00:10:44.239 "data_offset": 0, 00:10:44.239 "data_size": 65536 00:10:44.239 }, 00:10:44.239 { 00:10:44.239 "name": "BaseBdev2", 00:10:44.239 "uuid": "aa24a495-02dd-4b1f-adf9-4adaad803aac", 00:10:44.239 "is_configured": 
true, 00:10:44.239 "data_offset": 0, 00:10:44.239 "data_size": 65536 00:10:44.239 }, 00:10:44.239 { 00:10:44.239 "name": "BaseBdev3", 00:10:44.239 "uuid": "0dbf284c-7c07-4924-8c98-ea0175eabbbd", 00:10:44.239 "is_configured": true, 00:10:44.239 "data_offset": 0, 00:10:44.239 "data_size": 65536 00:10:44.239 }, 00:10:44.239 { 00:10:44.239 "name": "BaseBdev4", 00:10:44.239 "uuid": "c2cc0c8d-84ce-4a66-b654-5872d2fcdc92", 00:10:44.239 "is_configured": true, 00:10:44.239 "data_offset": 0, 00:10:44.239 "data_size": 65536 00:10:44.239 } 00:10:44.239 ] 00:10:44.239 }' 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.239 19:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.498 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:44.498 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.498 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.498 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.498 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.498 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.498 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.757 [2024-12-12 19:39:27.377308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.757 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.757 [2024-12-12 19:39:27.538272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.016 19:39:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.016 [2024-12-12 19:39:27.695706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:45.016 [2024-12-12 19:39:27.695783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.016 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.276 BaseBdev2 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.276 [ 00:10:45.276 { 00:10:45.276 "name": "BaseBdev2", 00:10:45.276 "aliases": [ 00:10:45.276 "2ed11b22-5e16-4b6a-b30e-6754bfa68d89" 00:10:45.276 ], 00:10:45.276 "product_name": "Malloc disk", 00:10:45.276 "block_size": 512, 00:10:45.276 "num_blocks": 65536, 00:10:45.276 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:45.276 "assigned_rate_limits": { 00:10:45.276 "rw_ios_per_sec": 0, 00:10:45.276 "rw_mbytes_per_sec": 0, 00:10:45.276 "r_mbytes_per_sec": 0, 00:10:45.276 "w_mbytes_per_sec": 0 00:10:45.276 }, 00:10:45.276 "claimed": false, 00:10:45.276 "zoned": false, 00:10:45.276 "supported_io_types": { 00:10:45.276 "read": true, 00:10:45.276 "write": true, 00:10:45.276 "unmap": true, 00:10:45.276 "flush": true, 00:10:45.276 "reset": true, 00:10:45.276 "nvme_admin": false, 00:10:45.276 "nvme_io": false, 00:10:45.276 "nvme_io_md": false, 00:10:45.276 "write_zeroes": true, 00:10:45.276 "zcopy": true, 00:10:45.276 "get_zone_info": false, 00:10:45.276 "zone_management": false, 00:10:45.276 "zone_append": false, 00:10:45.276 "compare": false, 00:10:45.276 "compare_and_write": false, 00:10:45.276 "abort": true, 00:10:45.276 "seek_hole": false, 00:10:45.276 "seek_data": false, 
00:10:45.276 "copy": true, 00:10:45.276 "nvme_iov_md": false 00:10:45.276 }, 00:10:45.276 "memory_domains": [ 00:10:45.276 { 00:10:45.276 "dma_device_id": "system", 00:10:45.276 "dma_device_type": 1 00:10:45.276 }, 00:10:45.276 { 00:10:45.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.276 "dma_device_type": 2 00:10:45.276 } 00:10:45.276 ], 00:10:45.276 "driver_specific": {} 00:10:45.276 } 00:10:45.276 ] 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.276 BaseBdev3 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.276 
19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.276 19:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.276 [ 00:10:45.276 { 00:10:45.276 "name": "BaseBdev3", 00:10:45.276 "aliases": [ 00:10:45.276 "a776484a-9b9a-4b2d-b546-8cd2c519d15f" 00:10:45.276 ], 00:10:45.276 "product_name": "Malloc disk", 00:10:45.276 "block_size": 512, 00:10:45.276 "num_blocks": 65536, 00:10:45.276 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:45.276 "assigned_rate_limits": { 00:10:45.276 "rw_ios_per_sec": 0, 00:10:45.276 "rw_mbytes_per_sec": 0, 00:10:45.277 "r_mbytes_per_sec": 0, 00:10:45.277 "w_mbytes_per_sec": 0 00:10:45.277 }, 00:10:45.277 "claimed": false, 00:10:45.277 "zoned": false, 00:10:45.277 "supported_io_types": { 00:10:45.277 "read": true, 00:10:45.277 "write": true, 00:10:45.277 "unmap": true, 00:10:45.277 "flush": true, 00:10:45.277 "reset": true, 00:10:45.277 "nvme_admin": false, 00:10:45.277 "nvme_io": false, 00:10:45.277 "nvme_io_md": false, 00:10:45.277 "write_zeroes": true, 00:10:45.277 "zcopy": true, 00:10:45.277 "get_zone_info": false, 00:10:45.277 "zone_management": false, 00:10:45.277 "zone_append": false, 00:10:45.277 "compare": false, 00:10:45.277 "compare_and_write": false, 00:10:45.277 "abort": true, 00:10:45.277 "seek_hole": false, 00:10:45.277 "seek_data": false, 00:10:45.277 
"copy": true, 00:10:45.277 "nvme_iov_md": false 00:10:45.277 }, 00:10:45.277 "memory_domains": [ 00:10:45.277 { 00:10:45.277 "dma_device_id": "system", 00:10:45.277 "dma_device_type": 1 00:10:45.277 }, 00:10:45.277 { 00:10:45.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.277 "dma_device_type": 2 00:10:45.277 } 00:10:45.277 ], 00:10:45.277 "driver_specific": {} 00:10:45.277 } 00:10:45.277 ] 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.277 BaseBdev4 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.277 19:39:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.277 [ 00:10:45.277 { 00:10:45.277 "name": "BaseBdev4", 00:10:45.277 "aliases": [ 00:10:45.277 "ab7f1c3d-9f91-40c3-b25e-718203855755" 00:10:45.277 ], 00:10:45.277 "product_name": "Malloc disk", 00:10:45.277 "block_size": 512, 00:10:45.277 "num_blocks": 65536, 00:10:45.277 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:45.277 "assigned_rate_limits": { 00:10:45.277 "rw_ios_per_sec": 0, 00:10:45.277 "rw_mbytes_per_sec": 0, 00:10:45.277 "r_mbytes_per_sec": 0, 00:10:45.277 "w_mbytes_per_sec": 0 00:10:45.277 }, 00:10:45.277 "claimed": false, 00:10:45.277 "zoned": false, 00:10:45.277 "supported_io_types": { 00:10:45.277 "read": true, 00:10:45.277 "write": true, 00:10:45.277 "unmap": true, 00:10:45.277 "flush": true, 00:10:45.277 "reset": true, 00:10:45.277 "nvme_admin": false, 00:10:45.277 "nvme_io": false, 00:10:45.277 "nvme_io_md": false, 00:10:45.277 "write_zeroes": true, 00:10:45.277 "zcopy": true, 00:10:45.277 "get_zone_info": false, 00:10:45.277 "zone_management": false, 00:10:45.277 "zone_append": false, 00:10:45.277 "compare": false, 00:10:45.277 "compare_and_write": false, 00:10:45.277 "abort": true, 00:10:45.277 "seek_hole": false, 00:10:45.277 "seek_data": false, 00:10:45.277 "copy": true, 
00:10:45.277 "nvme_iov_md": false 00:10:45.277 }, 00:10:45.277 "memory_domains": [ 00:10:45.277 { 00:10:45.277 "dma_device_id": "system", 00:10:45.277 "dma_device_type": 1 00:10:45.277 }, 00:10:45.277 { 00:10:45.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.277 "dma_device_type": 2 00:10:45.277 } 00:10:45.277 ], 00:10:45.277 "driver_specific": {} 00:10:45.277 } 00:10:45.277 ] 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.277 [2024-12-12 19:39:28.109405] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.277 [2024-12-12 19:39:28.109459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.277 [2024-12-12 19:39:28.109482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.277 [2024-12-12 19:39:28.111610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.277 [2024-12-12 19:39:28.111665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.277 19:39:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.277 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.536 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.536 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.536 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.536 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.536 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.536 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.536 "name": "Existed_Raid", 00:10:45.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.536 "strip_size_kb": 64, 00:10:45.536 "state": "configuring", 00:10:45.536 
"raid_level": "concat", 00:10:45.536 "superblock": false, 00:10:45.536 "num_base_bdevs": 4, 00:10:45.536 "num_base_bdevs_discovered": 3, 00:10:45.536 "num_base_bdevs_operational": 4, 00:10:45.536 "base_bdevs_list": [ 00:10:45.536 { 00:10:45.536 "name": "BaseBdev1", 00:10:45.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.536 "is_configured": false, 00:10:45.536 "data_offset": 0, 00:10:45.536 "data_size": 0 00:10:45.536 }, 00:10:45.536 { 00:10:45.536 "name": "BaseBdev2", 00:10:45.536 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:45.536 "is_configured": true, 00:10:45.536 "data_offset": 0, 00:10:45.536 "data_size": 65536 00:10:45.536 }, 00:10:45.536 { 00:10:45.536 "name": "BaseBdev3", 00:10:45.536 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:45.536 "is_configured": true, 00:10:45.536 "data_offset": 0, 00:10:45.536 "data_size": 65536 00:10:45.536 }, 00:10:45.536 { 00:10:45.536 "name": "BaseBdev4", 00:10:45.536 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:45.536 "is_configured": true, 00:10:45.536 "data_offset": 0, 00:10:45.536 "data_size": 65536 00:10:45.536 } 00:10:45.536 ] 00:10:45.536 }' 00:10:45.536 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.536 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.794 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:45.794 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.794 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.795 [2024-12-12 19:39:28.540753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.795 "name": "Existed_Raid", 00:10:45.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.795 "strip_size_kb": 64, 00:10:45.795 "state": "configuring", 00:10:45.795 "raid_level": "concat", 00:10:45.795 "superblock": false, 
00:10:45.795 "num_base_bdevs": 4, 00:10:45.795 "num_base_bdevs_discovered": 2, 00:10:45.795 "num_base_bdevs_operational": 4, 00:10:45.795 "base_bdevs_list": [ 00:10:45.795 { 00:10:45.795 "name": "BaseBdev1", 00:10:45.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.795 "is_configured": false, 00:10:45.795 "data_offset": 0, 00:10:45.795 "data_size": 0 00:10:45.795 }, 00:10:45.795 { 00:10:45.795 "name": null, 00:10:45.795 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:45.795 "is_configured": false, 00:10:45.795 "data_offset": 0, 00:10:45.795 "data_size": 65536 00:10:45.795 }, 00:10:45.795 { 00:10:45.795 "name": "BaseBdev3", 00:10:45.795 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:45.795 "is_configured": true, 00:10:45.795 "data_offset": 0, 00:10:45.795 "data_size": 65536 00:10:45.795 }, 00:10:45.795 { 00:10:45.795 "name": "BaseBdev4", 00:10:45.795 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:45.795 "is_configured": true, 00:10:45.795 "data_offset": 0, 00:10:45.795 "data_size": 65536 00:10:45.795 } 00:10:45.795 ] 00:10:45.795 }' 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.795 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.363 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.363 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 19:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:46.363 19:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:46.363 19:39:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 [2024-12-12 19:39:29.053103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.363 BaseBdev1 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.363 [ 00:10:46.363 { 00:10:46.363 "name": "BaseBdev1", 00:10:46.363 "aliases": [ 00:10:46.363 "cca2663c-5931-484c-a8eb-a2cbad05e3be" 00:10:46.363 ], 00:10:46.363 "product_name": "Malloc disk", 00:10:46.363 "block_size": 512, 00:10:46.363 "num_blocks": 65536, 00:10:46.363 "uuid": "cca2663c-5931-484c-a8eb-a2cbad05e3be", 00:10:46.363 "assigned_rate_limits": { 00:10:46.363 "rw_ios_per_sec": 0, 00:10:46.363 "rw_mbytes_per_sec": 0, 00:10:46.363 "r_mbytes_per_sec": 0, 00:10:46.363 "w_mbytes_per_sec": 0 00:10:46.363 }, 00:10:46.363 "claimed": true, 00:10:46.363 "claim_type": "exclusive_write", 00:10:46.363 "zoned": false, 00:10:46.363 "supported_io_types": { 00:10:46.363 "read": true, 00:10:46.363 "write": true, 00:10:46.363 "unmap": true, 00:10:46.363 "flush": true, 00:10:46.363 "reset": true, 00:10:46.363 "nvme_admin": false, 00:10:46.363 "nvme_io": false, 00:10:46.363 "nvme_io_md": false, 00:10:46.363 "write_zeroes": true, 00:10:46.363 "zcopy": true, 00:10:46.363 "get_zone_info": false, 00:10:46.363 "zone_management": false, 00:10:46.363 "zone_append": false, 00:10:46.363 "compare": false, 00:10:46.363 "compare_and_write": false, 00:10:46.363 "abort": true, 00:10:46.363 "seek_hole": false, 00:10:46.363 "seek_data": false, 00:10:46.363 "copy": true, 00:10:46.363 "nvme_iov_md": false 00:10:46.363 }, 00:10:46.363 "memory_domains": [ 00:10:46.363 { 00:10:46.363 "dma_device_id": "system", 00:10:46.363 "dma_device_type": 1 00:10:46.363 }, 00:10:46.363 { 00:10:46.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.363 "dma_device_type": 2 00:10:46.363 } 00:10:46.363 ], 00:10:46.363 "driver_specific": {} 00:10:46.363 } 00:10:46.363 ] 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.363 "name": "Existed_Raid", 00:10:46.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.363 "strip_size_kb": 64, 00:10:46.363 "state": "configuring", 00:10:46.363 "raid_level": "concat", 00:10:46.363 "superblock": false, 
00:10:46.363 "num_base_bdevs": 4, 00:10:46.363 "num_base_bdevs_discovered": 3, 00:10:46.363 "num_base_bdevs_operational": 4, 00:10:46.363 "base_bdevs_list": [ 00:10:46.363 { 00:10:46.363 "name": "BaseBdev1", 00:10:46.363 "uuid": "cca2663c-5931-484c-a8eb-a2cbad05e3be", 00:10:46.363 "is_configured": true, 00:10:46.363 "data_offset": 0, 00:10:46.363 "data_size": 65536 00:10:46.363 }, 00:10:46.363 { 00:10:46.363 "name": null, 00:10:46.363 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:46.363 "is_configured": false, 00:10:46.363 "data_offset": 0, 00:10:46.363 "data_size": 65536 00:10:46.363 }, 00:10:46.363 { 00:10:46.363 "name": "BaseBdev3", 00:10:46.363 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:46.363 "is_configured": true, 00:10:46.363 "data_offset": 0, 00:10:46.363 "data_size": 65536 00:10:46.363 }, 00:10:46.363 { 00:10:46.363 "name": "BaseBdev4", 00:10:46.363 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:46.363 "is_configured": true, 00:10:46.363 "data_offset": 0, 00:10:46.363 "data_size": 65536 00:10:46.363 } 00:10:46.363 ] 00:10:46.363 }' 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.363 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:46.932 19:39:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.932 [2024-12-12 19:39:29.596304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.932 "name": "Existed_Raid", 00:10:46.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.932 "strip_size_kb": 64, 00:10:46.932 "state": "configuring", 00:10:46.932 "raid_level": "concat", 00:10:46.932 "superblock": false, 00:10:46.932 "num_base_bdevs": 4, 00:10:46.932 "num_base_bdevs_discovered": 2, 00:10:46.932 "num_base_bdevs_operational": 4, 00:10:46.932 "base_bdevs_list": [ 00:10:46.932 { 00:10:46.932 "name": "BaseBdev1", 00:10:46.932 "uuid": "cca2663c-5931-484c-a8eb-a2cbad05e3be", 00:10:46.932 "is_configured": true, 00:10:46.932 "data_offset": 0, 00:10:46.932 "data_size": 65536 00:10:46.932 }, 00:10:46.932 { 00:10:46.932 "name": null, 00:10:46.932 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:46.932 "is_configured": false, 00:10:46.932 "data_offset": 0, 00:10:46.932 "data_size": 65536 00:10:46.932 }, 00:10:46.932 { 00:10:46.932 "name": null, 00:10:46.932 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:46.932 "is_configured": false, 00:10:46.932 "data_offset": 0, 00:10:46.932 "data_size": 65536 00:10:46.932 }, 00:10:46.932 { 00:10:46.932 "name": "BaseBdev4", 00:10:46.932 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:46.932 "is_configured": true, 00:10:46.932 "data_offset": 0, 00:10:46.932 "data_size": 65536 00:10:46.932 } 00:10:46.932 ] 00:10:46.932 }' 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.932 19:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.501 [2024-12-12 19:39:30.091484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.501 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.501 "name": "Existed_Raid", 00:10:47.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.501 "strip_size_kb": 64, 00:10:47.501 "state": "configuring", 00:10:47.501 "raid_level": "concat", 00:10:47.501 "superblock": false, 00:10:47.501 "num_base_bdevs": 4, 00:10:47.501 "num_base_bdevs_discovered": 3, 00:10:47.501 "num_base_bdevs_operational": 4, 00:10:47.501 "base_bdevs_list": [ 00:10:47.501 { 00:10:47.501 "name": "BaseBdev1", 00:10:47.501 "uuid": "cca2663c-5931-484c-a8eb-a2cbad05e3be", 00:10:47.501 "is_configured": true, 00:10:47.501 "data_offset": 0, 00:10:47.501 "data_size": 65536 00:10:47.501 }, 00:10:47.501 { 00:10:47.501 "name": null, 00:10:47.501 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:47.501 "is_configured": false, 00:10:47.501 "data_offset": 0, 00:10:47.501 "data_size": 65536 00:10:47.501 }, 00:10:47.501 { 00:10:47.501 "name": "BaseBdev3", 00:10:47.501 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:47.501 "is_configured": 
true, 00:10:47.502 "data_offset": 0, 00:10:47.502 "data_size": 65536 00:10:47.502 }, 00:10:47.502 { 00:10:47.502 "name": "BaseBdev4", 00:10:47.502 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:47.502 "is_configured": true, 00:10:47.502 "data_offset": 0, 00:10:47.502 "data_size": 65536 00:10:47.502 } 00:10:47.502 ] 00:10:47.502 }' 00:10:47.502 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.502 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.761 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.761 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.761 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.761 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.761 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.761 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:47.761 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.761 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.761 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.761 [2024-12-12 19:39:30.546817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.020 "name": "Existed_Raid", 00:10:48.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.020 "strip_size_kb": 64, 00:10:48.020 "state": "configuring", 00:10:48.020 "raid_level": "concat", 00:10:48.020 "superblock": false, 00:10:48.020 "num_base_bdevs": 4, 00:10:48.020 "num_base_bdevs_discovered": 2, 00:10:48.020 "num_base_bdevs_operational": 4, 00:10:48.020 
"base_bdevs_list": [ 00:10:48.020 { 00:10:48.020 "name": null, 00:10:48.020 "uuid": "cca2663c-5931-484c-a8eb-a2cbad05e3be", 00:10:48.020 "is_configured": false, 00:10:48.020 "data_offset": 0, 00:10:48.020 "data_size": 65536 00:10:48.020 }, 00:10:48.020 { 00:10:48.020 "name": null, 00:10:48.020 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:48.020 "is_configured": false, 00:10:48.020 "data_offset": 0, 00:10:48.020 "data_size": 65536 00:10:48.020 }, 00:10:48.020 { 00:10:48.020 "name": "BaseBdev3", 00:10:48.020 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:48.020 "is_configured": true, 00:10:48.020 "data_offset": 0, 00:10:48.020 "data_size": 65536 00:10:48.020 }, 00:10:48.020 { 00:10:48.020 "name": "BaseBdev4", 00:10:48.020 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:48.020 "is_configured": true, 00:10:48.020 "data_offset": 0, 00:10:48.020 "data_size": 65536 00:10:48.020 } 00:10:48.020 ] 00:10:48.020 }' 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.020 19:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.280 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.280 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.280 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.280 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:48.636 19:39:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.636 [2024-12-12 19:39:31.163265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.636 "name": "Existed_Raid", 00:10:48.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.636 "strip_size_kb": 64, 00:10:48.636 "state": "configuring", 00:10:48.636 "raid_level": "concat", 00:10:48.636 "superblock": false, 00:10:48.636 "num_base_bdevs": 4, 00:10:48.636 "num_base_bdevs_discovered": 3, 00:10:48.636 "num_base_bdevs_operational": 4, 00:10:48.636 "base_bdevs_list": [ 00:10:48.636 { 00:10:48.636 "name": null, 00:10:48.636 "uuid": "cca2663c-5931-484c-a8eb-a2cbad05e3be", 00:10:48.636 "is_configured": false, 00:10:48.636 "data_offset": 0, 00:10:48.636 "data_size": 65536 00:10:48.636 }, 00:10:48.636 { 00:10:48.636 "name": "BaseBdev2", 00:10:48.636 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:48.636 "is_configured": true, 00:10:48.636 "data_offset": 0, 00:10:48.636 "data_size": 65536 00:10:48.636 }, 00:10:48.636 { 00:10:48.636 "name": "BaseBdev3", 00:10:48.636 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:48.636 "is_configured": true, 00:10:48.636 "data_offset": 0, 00:10:48.636 "data_size": 65536 00:10:48.636 }, 00:10:48.636 { 00:10:48.636 "name": "BaseBdev4", 00:10:48.636 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:48.636 "is_configured": true, 00:10:48.636 "data_offset": 0, 00:10:48.636 "data_size": 65536 00:10:48.636 } 00:10:48.636 ] 00:10:48.636 }' 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.636 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cca2663c-5931-484c-a8eb-a2cbad05e3be 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.922 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.181 [2024-12-12 19:39:31.780980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:49.181 [2024-12-12 19:39:31.781043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:49.181 [2024-12-12 19:39:31.781052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:49.181 [2024-12-12 19:39:31.781372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:49.181 [2024-12-12 19:39:31.781606] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:49.181 [2024-12-12 19:39:31.781626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:49.181 [2024-12-12 19:39:31.781958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.181 NewBaseBdev 00:10:49.181 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.182 [ 00:10:49.182 { 
00:10:49.182 "name": "NewBaseBdev", 00:10:49.182 "aliases": [ 00:10:49.182 "cca2663c-5931-484c-a8eb-a2cbad05e3be" 00:10:49.182 ], 00:10:49.182 "product_name": "Malloc disk", 00:10:49.182 "block_size": 512, 00:10:49.182 "num_blocks": 65536, 00:10:49.182 "uuid": "cca2663c-5931-484c-a8eb-a2cbad05e3be", 00:10:49.182 "assigned_rate_limits": { 00:10:49.182 "rw_ios_per_sec": 0, 00:10:49.182 "rw_mbytes_per_sec": 0, 00:10:49.182 "r_mbytes_per_sec": 0, 00:10:49.182 "w_mbytes_per_sec": 0 00:10:49.182 }, 00:10:49.182 "claimed": true, 00:10:49.182 "claim_type": "exclusive_write", 00:10:49.182 "zoned": false, 00:10:49.182 "supported_io_types": { 00:10:49.182 "read": true, 00:10:49.182 "write": true, 00:10:49.182 "unmap": true, 00:10:49.182 "flush": true, 00:10:49.182 "reset": true, 00:10:49.182 "nvme_admin": false, 00:10:49.182 "nvme_io": false, 00:10:49.182 "nvme_io_md": false, 00:10:49.182 "write_zeroes": true, 00:10:49.182 "zcopy": true, 00:10:49.182 "get_zone_info": false, 00:10:49.182 "zone_management": false, 00:10:49.182 "zone_append": false, 00:10:49.182 "compare": false, 00:10:49.182 "compare_and_write": false, 00:10:49.182 "abort": true, 00:10:49.182 "seek_hole": false, 00:10:49.182 "seek_data": false, 00:10:49.182 "copy": true, 00:10:49.182 "nvme_iov_md": false 00:10:49.182 }, 00:10:49.182 "memory_domains": [ 00:10:49.182 { 00:10:49.182 "dma_device_id": "system", 00:10:49.182 "dma_device_type": 1 00:10:49.182 }, 00:10:49.182 { 00:10:49.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.182 "dma_device_type": 2 00:10:49.182 } 00:10:49.182 ], 00:10:49.182 "driver_specific": {} 00:10:49.182 } 00:10:49.182 ] 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:49.182 
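The trace above shows `NewBaseBdev` being created and then passed through a `waitforbdev` helper (`bdev_timeout` defaulting to 2000, `bdev_wait_for_examine`, then `bdev_get_bdevs -b NewBaseBdev -t 2000`). A minimal bash sketch of that helper's shape, reconstructed from the traced lines — the `rpc_cmd` function here is a hypothetical stub standing in for SPDK's real `rpc.py` wrapper, not the actual implementation:

```shell
# Hypothetical stub for SPDK's rpc.py wrapper, so this sketch is self-contained.
rpc_cmd() { echo "rpc.py $*"; }

# Reconstructed from the traced helper: default the timeout when the caller
# passes none, let examine callbacks finish, then look the bdev up by name.
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # the trace shows the empty arg defaulting to 2000
    rpc_cmd bdev_wait_for_examine
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}

waitforbdev NewBaseBdev
```

With the stub in place the call prints the two RPC invocations the trace records; against a live target the second call blocks until the bdev appears or the timeout expires.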
19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.182 "name": "Existed_Raid", 00:10:49.182 "uuid": "4d417e61-189a-4fc6-9a07-09055c67ac9b", 00:10:49.182 "strip_size_kb": 64, 00:10:49.182 "state": "online", 00:10:49.182 "raid_level": "concat", 00:10:49.182 "superblock": false, 00:10:49.182 "num_base_bdevs": 4, 00:10:49.182 "num_base_bdevs_discovered": 4, 00:10:49.182 
"num_base_bdevs_operational": 4, 00:10:49.182 "base_bdevs_list": [ 00:10:49.182 { 00:10:49.182 "name": "NewBaseBdev", 00:10:49.182 "uuid": "cca2663c-5931-484c-a8eb-a2cbad05e3be", 00:10:49.182 "is_configured": true, 00:10:49.182 "data_offset": 0, 00:10:49.182 "data_size": 65536 00:10:49.182 }, 00:10:49.182 { 00:10:49.182 "name": "BaseBdev2", 00:10:49.182 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:49.182 "is_configured": true, 00:10:49.182 "data_offset": 0, 00:10:49.182 "data_size": 65536 00:10:49.182 }, 00:10:49.182 { 00:10:49.182 "name": "BaseBdev3", 00:10:49.182 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:49.182 "is_configured": true, 00:10:49.182 "data_offset": 0, 00:10:49.182 "data_size": 65536 00:10:49.182 }, 00:10:49.182 { 00:10:49.182 "name": "BaseBdev4", 00:10:49.182 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:49.182 "is_configured": true, 00:10:49.182 "data_offset": 0, 00:10:49.182 "data_size": 65536 00:10:49.182 } 00:10:49.182 ] 00:10:49.182 }' 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.182 19:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.441 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.441 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.441 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.441 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.441 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.441 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.441 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.441 
19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.441 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.441 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.441 [2024-12-12 19:39:32.248726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.442 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.442 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.442 "name": "Existed_Raid", 00:10:49.442 "aliases": [ 00:10:49.442 "4d417e61-189a-4fc6-9a07-09055c67ac9b" 00:10:49.442 ], 00:10:49.442 "product_name": "Raid Volume", 00:10:49.442 "block_size": 512, 00:10:49.442 "num_blocks": 262144, 00:10:49.442 "uuid": "4d417e61-189a-4fc6-9a07-09055c67ac9b", 00:10:49.442 "assigned_rate_limits": { 00:10:49.442 "rw_ios_per_sec": 0, 00:10:49.442 "rw_mbytes_per_sec": 0, 00:10:49.442 "r_mbytes_per_sec": 0, 00:10:49.442 "w_mbytes_per_sec": 0 00:10:49.442 }, 00:10:49.442 "claimed": false, 00:10:49.442 "zoned": false, 00:10:49.442 "supported_io_types": { 00:10:49.442 "read": true, 00:10:49.442 "write": true, 00:10:49.442 "unmap": true, 00:10:49.442 "flush": true, 00:10:49.442 "reset": true, 00:10:49.442 "nvme_admin": false, 00:10:49.442 "nvme_io": false, 00:10:49.442 "nvme_io_md": false, 00:10:49.442 "write_zeroes": true, 00:10:49.442 "zcopy": false, 00:10:49.442 "get_zone_info": false, 00:10:49.442 "zone_management": false, 00:10:49.442 "zone_append": false, 00:10:49.442 "compare": false, 00:10:49.442 "compare_and_write": false, 00:10:49.442 "abort": false, 00:10:49.442 "seek_hole": false, 00:10:49.442 "seek_data": false, 00:10:49.442 "copy": false, 00:10:49.442 "nvme_iov_md": false 00:10:49.442 }, 00:10:49.442 "memory_domains": [ 00:10:49.442 { 00:10:49.442 "dma_device_id": 
"system", 00:10:49.442 "dma_device_type": 1 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.442 "dma_device_type": 2 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "dma_device_id": "system", 00:10:49.442 "dma_device_type": 1 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.442 "dma_device_type": 2 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "dma_device_id": "system", 00:10:49.442 "dma_device_type": 1 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.442 "dma_device_type": 2 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "dma_device_id": "system", 00:10:49.442 "dma_device_type": 1 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.442 "dma_device_type": 2 00:10:49.442 } 00:10:49.442 ], 00:10:49.442 "driver_specific": { 00:10:49.442 "raid": { 00:10:49.442 "uuid": "4d417e61-189a-4fc6-9a07-09055c67ac9b", 00:10:49.442 "strip_size_kb": 64, 00:10:49.442 "state": "online", 00:10:49.442 "raid_level": "concat", 00:10:49.442 "superblock": false, 00:10:49.442 "num_base_bdevs": 4, 00:10:49.442 "num_base_bdevs_discovered": 4, 00:10:49.442 "num_base_bdevs_operational": 4, 00:10:49.442 "base_bdevs_list": [ 00:10:49.442 { 00:10:49.442 "name": "NewBaseBdev", 00:10:49.442 "uuid": "cca2663c-5931-484c-a8eb-a2cbad05e3be", 00:10:49.442 "is_configured": true, 00:10:49.442 "data_offset": 0, 00:10:49.442 "data_size": 65536 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "name": "BaseBdev2", 00:10:49.442 "uuid": "2ed11b22-5e16-4b6a-b30e-6754bfa68d89", 00:10:49.442 "is_configured": true, 00:10:49.442 "data_offset": 0, 00:10:49.442 "data_size": 65536 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "name": "BaseBdev3", 00:10:49.442 "uuid": "a776484a-9b9a-4b2d-b546-8cd2c519d15f", 00:10:49.442 "is_configured": true, 00:10:49.442 "data_offset": 0, 00:10:49.442 "data_size": 65536 00:10:49.442 }, 00:10:49.442 { 00:10:49.442 "name": 
"BaseBdev4", 00:10:49.442 "uuid": "ab7f1c3d-9f91-40c3-b25e-718203855755", 00:10:49.442 "is_configured": true, 00:10:49.442 "data_offset": 0, 00:10:49.442 "data_size": 65536 00:10:49.442 } 00:10:49.442 ] 00:10:49.442 } 00:10:49.442 } 00:10:49.442 }' 00:10:49.442 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:49.701 BaseBdev2 00:10:49.701 BaseBdev3 00:10:49.701 BaseBdev4' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.701 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.961 [2024-12-12 19:39:32.551722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.961 [2024-12-12 19:39:32.551757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.961 [2024-12-12 19:39:32.551841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.961 [2024-12-12 19:39:32.551921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.961 [2024-12-12 19:39:32.551937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72976 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72976 
']' 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72976 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72976 00:10:49.961 killing process with pid 72976 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72976' 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72976 00:10:49.961 [2024-12-12 19:39:32.591991] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.961 19:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72976 00:10:50.221 [2024-12-12 19:39:33.019221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:51.602 00:10:51.602 real 0m11.676s 00:10:51.602 user 0m18.267s 00:10:51.602 sys 0m2.183s 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.602 ************************************ 00:10:51.602 END TEST raid_state_function_test 00:10:51.602 ************************************ 00:10:51.602 19:39:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:51.602 
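The log now starts `raid_state_function_test_sb`, the same state-machine test rerun with a superblock (`concat 4 true`). Its prologue, traced further on, derives the `bdev_raid_create` arguments from the level and superblock flags; a self-contained bash sketch of that assembly, using the variable names visible in the trace (`rpc_cmd` is a hypothetical stub for SPDK's `rpc.py` wrapper):

```shell
# Hypothetical stub for SPDK's rpc.py wrapper, so the sketch runs standalone.
rpc_cmd() { echo "rpc.py $*"; }

raid_level=concat
superblock=true

# Non-raid1 levels take a strip size; the trace shows strip_size=64 -> '-z 64'.
strip_size_create_arg=""
if [[ $raid_level != raid1 ]]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi

# superblock=true turns into the '-s' flag on bdev_raid_create.
superblock_create_arg=""
if [[ $superblock == true ]]; then
    superblock_create_arg=-s
fi

rpc_cmd bdev_raid_create $strip_size_create_arg $superblock_create_arg \
    -r "$raid_level" -b "'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'" -n Existed_Raid
```

Leaving `$strip_size_create_arg` and `$superblock_create_arg` unquoted is deliberate here: an empty value then contributes no argument at all, which is how the raid1 and non-superblock variants drop those flags.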
19:39:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:51.602 19:39:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.602 19:39:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.602 ************************************ 00:10:51.602 START TEST raid_state_function_test_sb 00:10:51.602 ************************************ 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73647 00:10:51.602 Process raid pid: 73647 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 73647' 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73647 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73647 ']' 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.602 19:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.602 [2024-12-12 19:39:34.432300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:51.602 [2024-12-12 19:39:34.432477] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.861 [2024-12-12 19:39:34.600449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.121 [2024-12-12 19:39:34.728204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.386 [2024-12-12 19:39:34.965992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.386 [2024-12-12 19:39:34.966055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.649 [2024-12-12 19:39:35.242834] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.649 [2024-12-12 19:39:35.242896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.649 [2024-12-12 19:39:35.242906] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.649 [2024-12-12 19:39:35.242916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.649 [2024-12-12 19:39:35.242922] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:52.649 [2024-12-12 19:39:35.242931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.649 [2024-12-12 19:39:35.242943] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.649 [2024-12-12 19:39:35.242952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.649 
19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.649 "name": "Existed_Raid", 00:10:52.649 "uuid": "7f6e817e-0a7b-4226-8d6b-a3a2cb407cab", 00:10:52.649 "strip_size_kb": 64, 00:10:52.649 "state": "configuring", 00:10:52.649 "raid_level": "concat", 00:10:52.649 "superblock": true, 00:10:52.649 "num_base_bdevs": 4, 00:10:52.649 "num_base_bdevs_discovered": 0, 00:10:52.649 "num_base_bdevs_operational": 4, 00:10:52.649 "base_bdevs_list": [ 00:10:52.649 { 00:10:52.649 "name": "BaseBdev1", 00:10:52.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.649 "is_configured": false, 00:10:52.649 "data_offset": 0, 00:10:52.649 "data_size": 0 00:10:52.649 }, 00:10:52.649 { 00:10:52.649 "name": "BaseBdev2", 00:10:52.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.649 "is_configured": false, 00:10:52.649 "data_offset": 0, 00:10:52.649 "data_size": 0 00:10:52.649 }, 00:10:52.649 { 00:10:52.649 "name": "BaseBdev3", 00:10:52.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.649 "is_configured": false, 00:10:52.649 "data_offset": 0, 00:10:52.649 "data_size": 0 00:10:52.649 }, 00:10:52.649 { 00:10:52.649 "name": "BaseBdev4", 00:10:52.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.649 "is_configured": false, 00:10:52.649 "data_offset": 0, 00:10:52.649 "data_size": 0 00:10:52.649 } 00:10:52.649 ] 00:10:52.649 }' 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.649 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.908 19:39:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.908 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.908 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.908 [2024-12-12 19:39:35.590386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.908 [2024-12-12 19:39:35.590448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:52.908 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.908 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.908 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.908 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.908 [2024-12-12 19:39:35.598365] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.908 [2024-12-12 19:39:35.598414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.908 [2024-12-12 19:39:35.598424] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.908 [2024-12-12 19:39:35.598434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.908 [2024-12-12 19:39:35.598440] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.908 [2024-12-12 19:39:35.598450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.908 [2024-12-12 19:39:35.598456] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:52.908 [2024-12-12 19:39:35.598465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.908 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.909 [2024-12-12 19:39:35.649809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.909 BaseBdev1 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.909 [ 00:10:52.909 { 00:10:52.909 "name": "BaseBdev1", 00:10:52.909 "aliases": [ 00:10:52.909 "148087a5-43b8-471b-a2bf-41e3caf80fae" 00:10:52.909 ], 00:10:52.909 "product_name": "Malloc disk", 00:10:52.909 "block_size": 512, 00:10:52.909 "num_blocks": 65536, 00:10:52.909 "uuid": "148087a5-43b8-471b-a2bf-41e3caf80fae", 00:10:52.909 "assigned_rate_limits": { 00:10:52.909 "rw_ios_per_sec": 0, 00:10:52.909 "rw_mbytes_per_sec": 0, 00:10:52.909 "r_mbytes_per_sec": 0, 00:10:52.909 "w_mbytes_per_sec": 0 00:10:52.909 }, 00:10:52.909 "claimed": true, 00:10:52.909 "claim_type": "exclusive_write", 00:10:52.909 "zoned": false, 00:10:52.909 "supported_io_types": { 00:10:52.909 "read": true, 00:10:52.909 "write": true, 00:10:52.909 "unmap": true, 00:10:52.909 "flush": true, 00:10:52.909 "reset": true, 00:10:52.909 "nvme_admin": false, 00:10:52.909 "nvme_io": false, 00:10:52.909 "nvme_io_md": false, 00:10:52.909 "write_zeroes": true, 00:10:52.909 "zcopy": true, 00:10:52.909 "get_zone_info": false, 00:10:52.909 "zone_management": false, 00:10:52.909 "zone_append": false, 00:10:52.909 "compare": false, 00:10:52.909 "compare_and_write": false, 00:10:52.909 "abort": true, 00:10:52.909 "seek_hole": false, 00:10:52.909 "seek_data": false, 00:10:52.909 "copy": true, 00:10:52.909 "nvme_iov_md": false 00:10:52.909 }, 00:10:52.909 "memory_domains": [ 00:10:52.909 { 00:10:52.909 "dma_device_id": "system", 00:10:52.909 "dma_device_type": 1 00:10:52.909 }, 00:10:52.909 { 00:10:52.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.909 "dma_device_type": 2 00:10:52.909 } 
00:10:52.909 ], 00:10:52.909 "driver_specific": {} 00:10:52.909 } 00:10:52.909 ] 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.909 19:39:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.909 "name": "Existed_Raid", 00:10:52.909 "uuid": "2bea02d4-5c1c-401d-853c-3893428927ec", 00:10:52.909 "strip_size_kb": 64, 00:10:52.909 "state": "configuring", 00:10:52.909 "raid_level": "concat", 00:10:52.909 "superblock": true, 00:10:52.909 "num_base_bdevs": 4, 00:10:52.909 "num_base_bdevs_discovered": 1, 00:10:52.909 "num_base_bdevs_operational": 4, 00:10:52.909 "base_bdevs_list": [ 00:10:52.909 { 00:10:52.909 "name": "BaseBdev1", 00:10:52.909 "uuid": "148087a5-43b8-471b-a2bf-41e3caf80fae", 00:10:52.909 "is_configured": true, 00:10:52.909 "data_offset": 2048, 00:10:52.909 "data_size": 63488 00:10:52.909 }, 00:10:52.909 { 00:10:52.909 "name": "BaseBdev2", 00:10:52.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.909 "is_configured": false, 00:10:52.909 "data_offset": 0, 00:10:52.909 "data_size": 0 00:10:52.909 }, 00:10:52.909 { 00:10:52.909 "name": "BaseBdev3", 00:10:52.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.909 "is_configured": false, 00:10:52.909 "data_offset": 0, 00:10:52.909 "data_size": 0 00:10:52.909 }, 00:10:52.909 { 00:10:52.909 "name": "BaseBdev4", 00:10:52.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.909 "is_configured": false, 00:10:52.909 "data_offset": 0, 00:10:52.909 "data_size": 0 00:10:52.909 } 00:10:52.909 ] 00:10:52.909 }' 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.909 19:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.478 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.478 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.478 19:39:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.478 [2024-12-12 19:39:36.164984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.478 [2024-12-12 19:39:36.165058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:53.478 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.478 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.478 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.478 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.478 [2024-12-12 19:39:36.176981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.478 [2024-12-12 19:39:36.179145] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.479 [2024-12-12 19:39:36.179206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.479 [2024-12-12 19:39:36.179216] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.479 [2024-12-12 19:39:36.179226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.479 [2024-12-12 19:39:36.179232] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.479 [2024-12-12 19:39:36.179241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:53.479 "name": "Existed_Raid", 00:10:53.479 "uuid": "6160a345-042c-4ebf-834b-89b77b049450", 00:10:53.479 "strip_size_kb": 64, 00:10:53.479 "state": "configuring", 00:10:53.479 "raid_level": "concat", 00:10:53.479 "superblock": true, 00:10:53.479 "num_base_bdevs": 4, 00:10:53.479 "num_base_bdevs_discovered": 1, 00:10:53.479 "num_base_bdevs_operational": 4, 00:10:53.479 "base_bdevs_list": [ 00:10:53.479 { 00:10:53.479 "name": "BaseBdev1", 00:10:53.479 "uuid": "148087a5-43b8-471b-a2bf-41e3caf80fae", 00:10:53.479 "is_configured": true, 00:10:53.479 "data_offset": 2048, 00:10:53.479 "data_size": 63488 00:10:53.479 }, 00:10:53.479 { 00:10:53.479 "name": "BaseBdev2", 00:10:53.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.479 "is_configured": false, 00:10:53.479 "data_offset": 0, 00:10:53.479 "data_size": 0 00:10:53.479 }, 00:10:53.479 { 00:10:53.479 "name": "BaseBdev3", 00:10:53.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.479 "is_configured": false, 00:10:53.479 "data_offset": 0, 00:10:53.479 "data_size": 0 00:10:53.479 }, 00:10:53.479 { 00:10:53.479 "name": "BaseBdev4", 00:10:53.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.479 "is_configured": false, 00:10:53.479 "data_offset": 0, 00:10:53.479 "data_size": 0 00:10:53.479 } 00:10:53.479 ] 00:10:53.479 }' 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.479 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.048 [2024-12-12 19:39:36.711894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:54.048 BaseBdev2 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.048 [ 00:10:54.048 { 00:10:54.048 "name": "BaseBdev2", 00:10:54.048 "aliases": [ 00:10:54.048 "b410cfd0-6fdb-43ab-b4c5-3a1079686f8d" 00:10:54.048 ], 00:10:54.048 "product_name": "Malloc disk", 00:10:54.048 "block_size": 512, 00:10:54.048 "num_blocks": 65536, 00:10:54.048 "uuid": "b410cfd0-6fdb-43ab-b4c5-3a1079686f8d", 
00:10:54.048 "assigned_rate_limits": { 00:10:54.048 "rw_ios_per_sec": 0, 00:10:54.048 "rw_mbytes_per_sec": 0, 00:10:54.048 "r_mbytes_per_sec": 0, 00:10:54.048 "w_mbytes_per_sec": 0 00:10:54.048 }, 00:10:54.048 "claimed": true, 00:10:54.048 "claim_type": "exclusive_write", 00:10:54.048 "zoned": false, 00:10:54.048 "supported_io_types": { 00:10:54.048 "read": true, 00:10:54.048 "write": true, 00:10:54.048 "unmap": true, 00:10:54.048 "flush": true, 00:10:54.048 "reset": true, 00:10:54.048 "nvme_admin": false, 00:10:54.048 "nvme_io": false, 00:10:54.048 "nvme_io_md": false, 00:10:54.048 "write_zeroes": true, 00:10:54.048 "zcopy": true, 00:10:54.048 "get_zone_info": false, 00:10:54.048 "zone_management": false, 00:10:54.048 "zone_append": false, 00:10:54.048 "compare": false, 00:10:54.048 "compare_and_write": false, 00:10:54.048 "abort": true, 00:10:54.048 "seek_hole": false, 00:10:54.048 "seek_data": false, 00:10:54.048 "copy": true, 00:10:54.048 "nvme_iov_md": false 00:10:54.048 }, 00:10:54.048 "memory_domains": [ 00:10:54.048 { 00:10:54.048 "dma_device_id": "system", 00:10:54.048 "dma_device_type": 1 00:10:54.048 }, 00:10:54.048 { 00:10:54.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.048 "dma_device_type": 2 00:10:54.048 } 00:10:54.048 ], 00:10:54.048 "driver_specific": {} 00:10:54.048 } 00:10:54.048 ] 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.048 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.049 "name": "Existed_Raid", 00:10:54.049 "uuid": "6160a345-042c-4ebf-834b-89b77b049450", 00:10:54.049 "strip_size_kb": 64, 00:10:54.049 "state": "configuring", 00:10:54.049 "raid_level": "concat", 00:10:54.049 "superblock": true, 00:10:54.049 "num_base_bdevs": 4, 00:10:54.049 "num_base_bdevs_discovered": 2, 00:10:54.049 
"num_base_bdevs_operational": 4, 00:10:54.049 "base_bdevs_list": [ 00:10:54.049 { 00:10:54.049 "name": "BaseBdev1", 00:10:54.049 "uuid": "148087a5-43b8-471b-a2bf-41e3caf80fae", 00:10:54.049 "is_configured": true, 00:10:54.049 "data_offset": 2048, 00:10:54.049 "data_size": 63488 00:10:54.049 }, 00:10:54.049 { 00:10:54.049 "name": "BaseBdev2", 00:10:54.049 "uuid": "b410cfd0-6fdb-43ab-b4c5-3a1079686f8d", 00:10:54.049 "is_configured": true, 00:10:54.049 "data_offset": 2048, 00:10:54.049 "data_size": 63488 00:10:54.049 }, 00:10:54.049 { 00:10:54.049 "name": "BaseBdev3", 00:10:54.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.049 "is_configured": false, 00:10:54.049 "data_offset": 0, 00:10:54.049 "data_size": 0 00:10:54.049 }, 00:10:54.049 { 00:10:54.049 "name": "BaseBdev4", 00:10:54.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.049 "is_configured": false, 00:10:54.049 "data_offset": 0, 00:10:54.049 "data_size": 0 00:10:54.049 } 00:10:54.049 ] 00:10:54.049 }' 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.049 19:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.619 [2024-12-12 19:39:37.242290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.619 BaseBdev3 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.619 [ 00:10:54.619 { 00:10:54.619 "name": "BaseBdev3", 00:10:54.619 "aliases": [ 00:10:54.619 "2f42a945-3794-4e1a-a71b-bd2a8fbac39b" 00:10:54.619 ], 00:10:54.619 "product_name": "Malloc disk", 00:10:54.619 "block_size": 512, 00:10:54.619 "num_blocks": 65536, 00:10:54.619 "uuid": "2f42a945-3794-4e1a-a71b-bd2a8fbac39b", 00:10:54.619 "assigned_rate_limits": { 00:10:54.619 "rw_ios_per_sec": 0, 00:10:54.619 "rw_mbytes_per_sec": 0, 00:10:54.619 "r_mbytes_per_sec": 0, 00:10:54.619 "w_mbytes_per_sec": 0 00:10:54.619 }, 00:10:54.619 "claimed": true, 00:10:54.619 "claim_type": "exclusive_write", 00:10:54.619 "zoned": false, 00:10:54.619 "supported_io_types": { 
00:10:54.619 "read": true, 00:10:54.619 "write": true, 00:10:54.619 "unmap": true, 00:10:54.619 "flush": true, 00:10:54.619 "reset": true, 00:10:54.619 "nvme_admin": false, 00:10:54.619 "nvme_io": false, 00:10:54.619 "nvme_io_md": false, 00:10:54.619 "write_zeroes": true, 00:10:54.619 "zcopy": true, 00:10:54.619 "get_zone_info": false, 00:10:54.619 "zone_management": false, 00:10:54.619 "zone_append": false, 00:10:54.619 "compare": false, 00:10:54.619 "compare_and_write": false, 00:10:54.619 "abort": true, 00:10:54.619 "seek_hole": false, 00:10:54.619 "seek_data": false, 00:10:54.619 "copy": true, 00:10:54.619 "nvme_iov_md": false 00:10:54.619 }, 00:10:54.619 "memory_domains": [ 00:10:54.619 { 00:10:54.619 "dma_device_id": "system", 00:10:54.619 "dma_device_type": 1 00:10:54.619 }, 00:10:54.619 { 00:10:54.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.619 "dma_device_type": 2 00:10:54.619 } 00:10:54.619 ], 00:10:54.619 "driver_specific": {} 00:10:54.619 } 00:10:54.619 ] 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.619 "name": "Existed_Raid", 00:10:54.619 "uuid": "6160a345-042c-4ebf-834b-89b77b049450", 00:10:54.619 "strip_size_kb": 64, 00:10:54.619 "state": "configuring", 00:10:54.619 "raid_level": "concat", 00:10:54.619 "superblock": true, 00:10:54.619 "num_base_bdevs": 4, 00:10:54.619 "num_base_bdevs_discovered": 3, 00:10:54.619 "num_base_bdevs_operational": 4, 00:10:54.619 "base_bdevs_list": [ 00:10:54.619 { 00:10:54.619 "name": "BaseBdev1", 00:10:54.619 "uuid": "148087a5-43b8-471b-a2bf-41e3caf80fae", 00:10:54.619 "is_configured": true, 00:10:54.619 "data_offset": 2048, 00:10:54.619 "data_size": 63488 00:10:54.619 }, 00:10:54.619 { 00:10:54.619 "name": "BaseBdev2", 00:10:54.619 
"uuid": "b410cfd0-6fdb-43ab-b4c5-3a1079686f8d", 00:10:54.619 "is_configured": true, 00:10:54.619 "data_offset": 2048, 00:10:54.619 "data_size": 63488 00:10:54.619 }, 00:10:54.619 { 00:10:54.619 "name": "BaseBdev3", 00:10:54.619 "uuid": "2f42a945-3794-4e1a-a71b-bd2a8fbac39b", 00:10:54.619 "is_configured": true, 00:10:54.619 "data_offset": 2048, 00:10:54.619 "data_size": 63488 00:10:54.619 }, 00:10:54.619 { 00:10:54.619 "name": "BaseBdev4", 00:10:54.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.619 "is_configured": false, 00:10:54.619 "data_offset": 0, 00:10:54.619 "data_size": 0 00:10:54.619 } 00:10:54.619 ] 00:10:54.619 }' 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.619 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.879 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:54.879 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.879 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 [2024-12-12 19:39:37.741689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.139 [2024-12-12 19:39:37.742097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.139 [2024-12-12 19:39:37.742149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.139 [2024-12-12 19:39:37.742497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:55.139 [2024-12-12 19:39:37.742709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.139 [2024-12-12 19:39:37.742753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:10:55.139 [2024-12-12 19:39:37.742994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.139 BaseBdev4 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 [ 00:10:55.139 { 00:10:55.139 "name": "BaseBdev4", 00:10:55.139 "aliases": [ 00:10:55.139 "fbdf3ded-5d93-4ff2-85b4-1181029f4fa3" 00:10:55.139 ], 00:10:55.139 "product_name": "Malloc disk", 00:10:55.139 "block_size": 512, 
00:10:55.139 "num_blocks": 65536, 00:10:55.139 "uuid": "fbdf3ded-5d93-4ff2-85b4-1181029f4fa3", 00:10:55.139 "assigned_rate_limits": { 00:10:55.139 "rw_ios_per_sec": 0, 00:10:55.139 "rw_mbytes_per_sec": 0, 00:10:55.139 "r_mbytes_per_sec": 0, 00:10:55.139 "w_mbytes_per_sec": 0 00:10:55.139 }, 00:10:55.139 "claimed": true, 00:10:55.139 "claim_type": "exclusive_write", 00:10:55.139 "zoned": false, 00:10:55.139 "supported_io_types": { 00:10:55.139 "read": true, 00:10:55.139 "write": true, 00:10:55.139 "unmap": true, 00:10:55.139 "flush": true, 00:10:55.139 "reset": true, 00:10:55.139 "nvme_admin": false, 00:10:55.139 "nvme_io": false, 00:10:55.139 "nvme_io_md": false, 00:10:55.139 "write_zeroes": true, 00:10:55.139 "zcopy": true, 00:10:55.139 "get_zone_info": false, 00:10:55.139 "zone_management": false, 00:10:55.139 "zone_append": false, 00:10:55.139 "compare": false, 00:10:55.139 "compare_and_write": false, 00:10:55.139 "abort": true, 00:10:55.139 "seek_hole": false, 00:10:55.139 "seek_data": false, 00:10:55.139 "copy": true, 00:10:55.139 "nvme_iov_md": false 00:10:55.139 }, 00:10:55.139 "memory_domains": [ 00:10:55.139 { 00:10:55.139 "dma_device_id": "system", 00:10:55.139 "dma_device_type": 1 00:10:55.139 }, 00:10:55.139 { 00:10:55.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.139 "dma_device_type": 2 00:10:55.139 } 00:10:55.139 ], 00:10:55.139 "driver_specific": {} 00:10:55.139 } 00:10:55.139 ] 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.139 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.139 "name": "Existed_Raid", 00:10:55.139 "uuid": "6160a345-042c-4ebf-834b-89b77b049450", 00:10:55.139 "strip_size_kb": 64, 00:10:55.139 "state": "online", 00:10:55.139 "raid_level": "concat", 00:10:55.139 "superblock": true, 00:10:55.139 "num_base_bdevs": 
4, 00:10:55.139 "num_base_bdevs_discovered": 4, 00:10:55.139 "num_base_bdevs_operational": 4, 00:10:55.139 "base_bdevs_list": [ 00:10:55.139 { 00:10:55.139 "name": "BaseBdev1", 00:10:55.139 "uuid": "148087a5-43b8-471b-a2bf-41e3caf80fae", 00:10:55.139 "is_configured": true, 00:10:55.139 "data_offset": 2048, 00:10:55.139 "data_size": 63488 00:10:55.139 }, 00:10:55.139 { 00:10:55.139 "name": "BaseBdev2", 00:10:55.139 "uuid": "b410cfd0-6fdb-43ab-b4c5-3a1079686f8d", 00:10:55.139 "is_configured": true, 00:10:55.139 "data_offset": 2048, 00:10:55.139 "data_size": 63488 00:10:55.139 }, 00:10:55.139 { 00:10:55.139 "name": "BaseBdev3", 00:10:55.139 "uuid": "2f42a945-3794-4e1a-a71b-bd2a8fbac39b", 00:10:55.139 "is_configured": true, 00:10:55.139 "data_offset": 2048, 00:10:55.139 "data_size": 63488 00:10:55.139 }, 00:10:55.139 { 00:10:55.139 "name": "BaseBdev4", 00:10:55.139 "uuid": "fbdf3ded-5d93-4ff2-85b4-1181029f4fa3", 00:10:55.139 "is_configured": true, 00:10:55.140 "data_offset": 2048, 00:10:55.140 "data_size": 63488 00:10:55.140 } 00:10:55.140 ] 00:10:55.140 }' 00:10:55.140 19:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.140 19:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.399 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.399 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.399 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.399 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.399 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.399 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.399 
19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.399 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.399 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.399 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.399 [2024-12-12 19:39:38.221375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.659 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.659 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.659 "name": "Existed_Raid", 00:10:55.659 "aliases": [ 00:10:55.659 "6160a345-042c-4ebf-834b-89b77b049450" 00:10:55.659 ], 00:10:55.659 "product_name": "Raid Volume", 00:10:55.659 "block_size": 512, 00:10:55.659 "num_blocks": 253952, 00:10:55.659 "uuid": "6160a345-042c-4ebf-834b-89b77b049450", 00:10:55.659 "assigned_rate_limits": { 00:10:55.659 "rw_ios_per_sec": 0, 00:10:55.659 "rw_mbytes_per_sec": 0, 00:10:55.659 "r_mbytes_per_sec": 0, 00:10:55.659 "w_mbytes_per_sec": 0 00:10:55.659 }, 00:10:55.659 "claimed": false, 00:10:55.659 "zoned": false, 00:10:55.659 "supported_io_types": { 00:10:55.659 "read": true, 00:10:55.659 "write": true, 00:10:55.659 "unmap": true, 00:10:55.659 "flush": true, 00:10:55.659 "reset": true, 00:10:55.659 "nvme_admin": false, 00:10:55.659 "nvme_io": false, 00:10:55.659 "nvme_io_md": false, 00:10:55.659 "write_zeroes": true, 00:10:55.659 "zcopy": false, 00:10:55.659 "get_zone_info": false, 00:10:55.659 "zone_management": false, 00:10:55.659 "zone_append": false, 00:10:55.659 "compare": false, 00:10:55.659 "compare_and_write": false, 00:10:55.659 "abort": false, 00:10:55.659 "seek_hole": false, 00:10:55.659 "seek_data": false, 00:10:55.659 "copy": false, 00:10:55.659 
"nvme_iov_md": false 00:10:55.659 }, 00:10:55.659 "memory_domains": [ 00:10:55.659 { 00:10:55.659 "dma_device_id": "system", 00:10:55.659 "dma_device_type": 1 00:10:55.659 }, 00:10:55.659 { 00:10:55.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.660 "dma_device_type": 2 00:10:55.660 }, 00:10:55.660 { 00:10:55.660 "dma_device_id": "system", 00:10:55.660 "dma_device_type": 1 00:10:55.660 }, 00:10:55.660 { 00:10:55.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.660 "dma_device_type": 2 00:10:55.660 }, 00:10:55.660 { 00:10:55.660 "dma_device_id": "system", 00:10:55.660 "dma_device_type": 1 00:10:55.660 }, 00:10:55.660 { 00:10:55.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.660 "dma_device_type": 2 00:10:55.660 }, 00:10:55.660 { 00:10:55.660 "dma_device_id": "system", 00:10:55.660 "dma_device_type": 1 00:10:55.660 }, 00:10:55.660 { 00:10:55.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.660 "dma_device_type": 2 00:10:55.660 } 00:10:55.660 ], 00:10:55.660 "driver_specific": { 00:10:55.660 "raid": { 00:10:55.660 "uuid": "6160a345-042c-4ebf-834b-89b77b049450", 00:10:55.660 "strip_size_kb": 64, 00:10:55.660 "state": "online", 00:10:55.660 "raid_level": "concat", 00:10:55.660 "superblock": true, 00:10:55.660 "num_base_bdevs": 4, 00:10:55.660 "num_base_bdevs_discovered": 4, 00:10:55.660 "num_base_bdevs_operational": 4, 00:10:55.660 "base_bdevs_list": [ 00:10:55.660 { 00:10:55.660 "name": "BaseBdev1", 00:10:55.660 "uuid": "148087a5-43b8-471b-a2bf-41e3caf80fae", 00:10:55.660 "is_configured": true, 00:10:55.660 "data_offset": 2048, 00:10:55.660 "data_size": 63488 00:10:55.660 }, 00:10:55.660 { 00:10:55.660 "name": "BaseBdev2", 00:10:55.660 "uuid": "b410cfd0-6fdb-43ab-b4c5-3a1079686f8d", 00:10:55.660 "is_configured": true, 00:10:55.660 "data_offset": 2048, 00:10:55.660 "data_size": 63488 00:10:55.660 }, 00:10:55.660 { 00:10:55.660 "name": "BaseBdev3", 00:10:55.660 "uuid": "2f42a945-3794-4e1a-a71b-bd2a8fbac39b", 00:10:55.660 "is_configured": true, 
00:10:55.660 "data_offset": 2048, 00:10:55.660 "data_size": 63488 00:10:55.660 }, 00:10:55.660 { 00:10:55.660 "name": "BaseBdev4", 00:10:55.660 "uuid": "fbdf3ded-5d93-4ff2-85b4-1181029f4fa3", 00:10:55.660 "is_configured": true, 00:10:55.660 "data_offset": 2048, 00:10:55.660 "data_size": 63488 00:10:55.660 } 00:10:55.660 ] 00:10:55.660 } 00:10:55.660 } 00:10:55.660 }' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:55.660 BaseBdev2 00:10:55.660 BaseBdev3 00:10:55.660 BaseBdev4' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.660 19:39:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.660 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.920 [2024-12-12 19:39:38.560485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.920 [2024-12-12 19:39:38.560535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.920 [2024-12-12 19:39:38.560607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.920 "name": "Existed_Raid", 00:10:55.920 "uuid": "6160a345-042c-4ebf-834b-89b77b049450", 00:10:55.920 "strip_size_kb": 64, 00:10:55.920 "state": "offline", 00:10:55.920 "raid_level": "concat", 00:10:55.920 "superblock": true, 00:10:55.920 "num_base_bdevs": 4, 00:10:55.920 "num_base_bdevs_discovered": 3, 00:10:55.920 "num_base_bdevs_operational": 3, 00:10:55.920 "base_bdevs_list": [ 00:10:55.920 { 00:10:55.920 "name": null, 00:10:55.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.920 "is_configured": false, 00:10:55.920 "data_offset": 0, 00:10:55.920 "data_size": 63488 00:10:55.920 }, 00:10:55.920 { 00:10:55.920 "name": "BaseBdev2", 00:10:55.920 "uuid": "b410cfd0-6fdb-43ab-b4c5-3a1079686f8d", 00:10:55.920 "is_configured": true, 00:10:55.920 "data_offset": 2048, 00:10:55.920 "data_size": 63488 00:10:55.920 }, 00:10:55.920 { 00:10:55.920 "name": "BaseBdev3", 00:10:55.920 "uuid": "2f42a945-3794-4e1a-a71b-bd2a8fbac39b", 00:10:55.920 "is_configured": true, 00:10:55.920 "data_offset": 2048, 00:10:55.920 "data_size": 63488 00:10:55.920 }, 00:10:55.920 { 00:10:55.920 "name": "BaseBdev4", 00:10:55.920 "uuid": "fbdf3ded-5d93-4ff2-85b4-1181029f4fa3", 00:10:55.920 "is_configured": true, 00:10:55.920 "data_offset": 2048, 00:10:55.920 "data_size": 63488 00:10:55.920 } 00:10:55.920 ] 00:10:55.920 }' 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.920 19:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.489 
19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.489 [2024-12-12 19:39:39.125561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.489 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.490 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.490 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:56.490 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.490 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.490 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:56.490 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.490 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.490 [2024-12-12 19:39:39.286150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:56.750 19:39:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.750 [2024-12-12 19:39:39.443067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:56.750 [2024-12-12 19:39:39.443230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.750 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.010 BaseBdev2 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.010 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.010 [ 00:10:57.010 { 00:10:57.010 "name": "BaseBdev2", 00:10:57.010 "aliases": [ 00:10:57.010 
"79403c69-b883-48cc-8660-2be04ca1cb37" 00:10:57.010 ], 00:10:57.010 "product_name": "Malloc disk", 00:10:57.010 "block_size": 512, 00:10:57.010 "num_blocks": 65536, 00:10:57.010 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:10:57.010 "assigned_rate_limits": { 00:10:57.010 "rw_ios_per_sec": 0, 00:10:57.010 "rw_mbytes_per_sec": 0, 00:10:57.010 "r_mbytes_per_sec": 0, 00:10:57.010 "w_mbytes_per_sec": 0 00:10:57.010 }, 00:10:57.010 "claimed": false, 00:10:57.010 "zoned": false, 00:10:57.010 "supported_io_types": { 00:10:57.010 "read": true, 00:10:57.010 "write": true, 00:10:57.010 "unmap": true, 00:10:57.010 "flush": true, 00:10:57.010 "reset": true, 00:10:57.010 "nvme_admin": false, 00:10:57.010 "nvme_io": false, 00:10:57.010 "nvme_io_md": false, 00:10:57.010 "write_zeroes": true, 00:10:57.010 "zcopy": true, 00:10:57.011 "get_zone_info": false, 00:10:57.011 "zone_management": false, 00:10:57.011 "zone_append": false, 00:10:57.011 "compare": false, 00:10:57.011 "compare_and_write": false, 00:10:57.011 "abort": true, 00:10:57.011 "seek_hole": false, 00:10:57.011 "seek_data": false, 00:10:57.011 "copy": true, 00:10:57.011 "nvme_iov_md": false 00:10:57.011 }, 00:10:57.011 "memory_domains": [ 00:10:57.011 { 00:10:57.011 "dma_device_id": "system", 00:10:57.011 "dma_device_type": 1 00:10:57.011 }, 00:10:57.011 { 00:10:57.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.011 "dma_device_type": 2 00:10:57.011 } 00:10:57.011 ], 00:10:57.011 "driver_specific": {} 00:10:57.011 } 00:10:57.011 ] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.011 19:39:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.011 BaseBdev3 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.011 [ 00:10:57.011 { 
00:10:57.011 "name": "BaseBdev3", 00:10:57.011 "aliases": [ 00:10:57.011 "5527cfbe-5a08-4fb1-8895-09ceda207b08" 00:10:57.011 ], 00:10:57.011 "product_name": "Malloc disk", 00:10:57.011 "block_size": 512, 00:10:57.011 "num_blocks": 65536, 00:10:57.011 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:10:57.011 "assigned_rate_limits": { 00:10:57.011 "rw_ios_per_sec": 0, 00:10:57.011 "rw_mbytes_per_sec": 0, 00:10:57.011 "r_mbytes_per_sec": 0, 00:10:57.011 "w_mbytes_per_sec": 0 00:10:57.011 }, 00:10:57.011 "claimed": false, 00:10:57.011 "zoned": false, 00:10:57.011 "supported_io_types": { 00:10:57.011 "read": true, 00:10:57.011 "write": true, 00:10:57.011 "unmap": true, 00:10:57.011 "flush": true, 00:10:57.011 "reset": true, 00:10:57.011 "nvme_admin": false, 00:10:57.011 "nvme_io": false, 00:10:57.011 "nvme_io_md": false, 00:10:57.011 "write_zeroes": true, 00:10:57.011 "zcopy": true, 00:10:57.011 "get_zone_info": false, 00:10:57.011 "zone_management": false, 00:10:57.011 "zone_append": false, 00:10:57.011 "compare": false, 00:10:57.011 "compare_and_write": false, 00:10:57.011 "abort": true, 00:10:57.011 "seek_hole": false, 00:10:57.011 "seek_data": false, 00:10:57.011 "copy": true, 00:10:57.011 "nvme_iov_md": false 00:10:57.011 }, 00:10:57.011 "memory_domains": [ 00:10:57.011 { 00:10:57.011 "dma_device_id": "system", 00:10:57.011 "dma_device_type": 1 00:10:57.011 }, 00:10:57.011 { 00:10:57.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.011 "dma_device_type": 2 00:10:57.011 } 00:10:57.011 ], 00:10:57.011 "driver_specific": {} 00:10:57.011 } 00:10:57.011 ] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.011 BaseBdev4 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.011 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:57.011 [ 00:10:57.011 { 00:10:57.011 "name": "BaseBdev4", 00:10:57.011 "aliases": [ 00:10:57.011 "fbc75940-83de-4bc3-9808-65775e26a081" 00:10:57.011 ], 00:10:57.011 "product_name": "Malloc disk", 00:10:57.011 "block_size": 512, 00:10:57.011 "num_blocks": 65536, 00:10:57.011 "uuid": "fbc75940-83de-4bc3-9808-65775e26a081", 00:10:57.011 "assigned_rate_limits": { 00:10:57.011 "rw_ios_per_sec": 0, 00:10:57.011 "rw_mbytes_per_sec": 0, 00:10:57.011 "r_mbytes_per_sec": 0, 00:10:57.011 "w_mbytes_per_sec": 0 00:10:57.011 }, 00:10:57.011 "claimed": false, 00:10:57.011 "zoned": false, 00:10:57.011 "supported_io_types": { 00:10:57.011 "read": true, 00:10:57.011 "write": true, 00:10:57.011 "unmap": true, 00:10:57.011 "flush": true, 00:10:57.011 "reset": true, 00:10:57.011 "nvme_admin": false, 00:10:57.011 "nvme_io": false, 00:10:57.011 "nvme_io_md": false, 00:10:57.011 "write_zeroes": true, 00:10:57.011 "zcopy": true, 00:10:57.011 "get_zone_info": false, 00:10:57.011 "zone_management": false, 00:10:57.011 "zone_append": false, 00:10:57.011 "compare": false, 00:10:57.011 "compare_and_write": false, 00:10:57.011 "abort": true, 00:10:57.011 "seek_hole": false, 00:10:57.011 "seek_data": false, 00:10:57.011 "copy": true, 00:10:57.011 "nvme_iov_md": false 00:10:57.011 }, 00:10:57.011 "memory_domains": [ 00:10:57.011 { 00:10:57.011 "dma_device_id": "system", 00:10:57.011 "dma_device_type": 1 00:10:57.011 }, 00:10:57.011 { 00:10:57.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.011 "dma_device_type": 2 00:10:57.011 } 00:10:57.011 ], 00:10:57.011 "driver_specific": {} 00:10:57.011 } 00:10:57.011 ] 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.271 19:39:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.271 [2024-12-12 19:39:39.860854] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.271 [2024-12-12 19:39:39.860994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.271 [2024-12-12 19:39:39.861030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.271 [2024-12-12 19:39:39.863276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.271 [2024-12-12 19:39:39.863334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.271 "name": "Existed_Raid", 00:10:57.271 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:10:57.271 "strip_size_kb": 64, 00:10:57.271 "state": "configuring", 00:10:57.271 "raid_level": "concat", 00:10:57.271 "superblock": true, 00:10:57.271 "num_base_bdevs": 4, 00:10:57.271 "num_base_bdevs_discovered": 3, 00:10:57.271 "num_base_bdevs_operational": 4, 00:10:57.271 "base_bdevs_list": [ 00:10:57.271 { 00:10:57.271 "name": "BaseBdev1", 00:10:57.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.271 "is_configured": false, 00:10:57.271 "data_offset": 0, 00:10:57.271 "data_size": 0 00:10:57.271 }, 00:10:57.271 { 00:10:57.271 "name": "BaseBdev2", 00:10:57.271 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:10:57.271 "is_configured": true, 00:10:57.271 "data_offset": 2048, 00:10:57.271 "data_size": 63488 
00:10:57.271 }, 00:10:57.271 { 00:10:57.271 "name": "BaseBdev3", 00:10:57.271 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:10:57.271 "is_configured": true, 00:10:57.271 "data_offset": 2048, 00:10:57.271 "data_size": 63488 00:10:57.271 }, 00:10:57.271 { 00:10:57.271 "name": "BaseBdev4", 00:10:57.271 "uuid": "fbc75940-83de-4bc3-9808-65775e26a081", 00:10:57.271 "is_configured": true, 00:10:57.271 "data_offset": 2048, 00:10:57.271 "data_size": 63488 00:10:57.271 } 00:10:57.271 ] 00:10:57.271 }' 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.271 19:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.531 [2024-12-12 19:39:40.324083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
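The offsets in the `verify_raid_bdev_state` dump above reflect the `-s` (superblock) flag passed to `bdev_raid_create`: each base bdev reserves its leading region for the RAID superblock, so `data_offset` is 2048 blocks and the usable `data_size` drops from the bdev's 65536 blocks to 63488. A sketch of that arithmetic (the 2048-block offset is taken from the dump itself, not derived from SPDK internals):

```python
num_blocks = 65536   # per-base-bdev size from bdev_get_bdevs
data_offset = 2048   # blocks reserved for the raid superblock (as reported above)
data_size = num_blocks - data_offset
print(data_size)     # 63488, matching "data_size" in the Existed_Raid dump
```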
num_base_bdevs_operational=4 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.531 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.791 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.791 "name": "Existed_Raid", 00:10:57.791 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:10:57.791 "strip_size_kb": 64, 00:10:57.791 "state": "configuring", 00:10:57.791 "raid_level": "concat", 00:10:57.791 "superblock": true, 00:10:57.791 "num_base_bdevs": 4, 00:10:57.791 "num_base_bdevs_discovered": 2, 00:10:57.791 "num_base_bdevs_operational": 4, 00:10:57.791 "base_bdevs_list": [ 00:10:57.791 { 00:10:57.791 "name": "BaseBdev1", 00:10:57.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.791 "is_configured": false, 00:10:57.791 "data_offset": 0, 00:10:57.791 "data_size": 0 00:10:57.791 }, 00:10:57.791 { 00:10:57.791 "name": null, 00:10:57.791 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:10:57.791 "is_configured": false, 00:10:57.791 "data_offset": 0, 00:10:57.791 "data_size": 63488 
00:10:57.791 }, 00:10:57.791 { 00:10:57.791 "name": "BaseBdev3", 00:10:57.791 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:10:57.791 "is_configured": true, 00:10:57.791 "data_offset": 2048, 00:10:57.791 "data_size": 63488 00:10:57.791 }, 00:10:57.791 { 00:10:57.791 "name": "BaseBdev4", 00:10:57.791 "uuid": "fbc75940-83de-4bc3-9808-65775e26a081", 00:10:57.791 "is_configured": true, 00:10:57.791 "data_offset": 2048, 00:10:57.791 "data_size": 63488 00:10:57.791 } 00:10:57.791 ] 00:10:57.791 }' 00:10:57.791 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.791 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.051 [2024-12-12 19:39:40.834159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.051 BaseBdev1 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.051 [ 00:10:58.051 { 00:10:58.051 "name": "BaseBdev1", 00:10:58.051 "aliases": [ 00:10:58.051 "82bb137f-c202-4569-9c47-476ca26dd34a" 00:10:58.051 ], 00:10:58.051 "product_name": "Malloc disk", 00:10:58.051 "block_size": 512, 00:10:58.051 "num_blocks": 65536, 00:10:58.051 "uuid": "82bb137f-c202-4569-9c47-476ca26dd34a", 00:10:58.051 "assigned_rate_limits": { 00:10:58.051 "rw_ios_per_sec": 0, 00:10:58.051 "rw_mbytes_per_sec": 0, 
00:10:58.051 "r_mbytes_per_sec": 0, 00:10:58.051 "w_mbytes_per_sec": 0 00:10:58.051 }, 00:10:58.051 "claimed": true, 00:10:58.051 "claim_type": "exclusive_write", 00:10:58.051 "zoned": false, 00:10:58.051 "supported_io_types": { 00:10:58.051 "read": true, 00:10:58.051 "write": true, 00:10:58.051 "unmap": true, 00:10:58.051 "flush": true, 00:10:58.051 "reset": true, 00:10:58.051 "nvme_admin": false, 00:10:58.051 "nvme_io": false, 00:10:58.051 "nvme_io_md": false, 00:10:58.051 "write_zeroes": true, 00:10:58.051 "zcopy": true, 00:10:58.051 "get_zone_info": false, 00:10:58.051 "zone_management": false, 00:10:58.051 "zone_append": false, 00:10:58.051 "compare": false, 00:10:58.051 "compare_and_write": false, 00:10:58.051 "abort": true, 00:10:58.051 "seek_hole": false, 00:10:58.051 "seek_data": false, 00:10:58.051 "copy": true, 00:10:58.051 "nvme_iov_md": false 00:10:58.051 }, 00:10:58.051 "memory_domains": [ 00:10:58.051 { 00:10:58.051 "dma_device_id": "system", 00:10:58.051 "dma_device_type": 1 00:10:58.051 }, 00:10:58.051 { 00:10:58.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.051 "dma_device_type": 2 00:10:58.051 } 00:10:58.051 ], 00:10:58.051 "driver_specific": {} 00:10:58.051 } 00:10:58.051 ] 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.051 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.052 19:39:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.052 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.311 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.311 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.311 "name": "Existed_Raid", 00:10:58.311 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:10:58.311 "strip_size_kb": 64, 00:10:58.311 "state": "configuring", 00:10:58.311 "raid_level": "concat", 00:10:58.311 "superblock": true, 00:10:58.311 "num_base_bdevs": 4, 00:10:58.311 "num_base_bdevs_discovered": 3, 00:10:58.311 "num_base_bdevs_operational": 4, 00:10:58.311 "base_bdevs_list": [ 00:10:58.311 { 00:10:58.311 "name": "BaseBdev1", 00:10:58.311 "uuid": "82bb137f-c202-4569-9c47-476ca26dd34a", 00:10:58.311 "is_configured": true, 00:10:58.311 "data_offset": 2048, 00:10:58.311 "data_size": 63488 00:10:58.311 }, 00:10:58.311 { 
00:10:58.311 "name": null, 00:10:58.311 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:10:58.311 "is_configured": false, 00:10:58.311 "data_offset": 0, 00:10:58.311 "data_size": 63488 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "name": "BaseBdev3", 00:10:58.311 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:10:58.311 "is_configured": true, 00:10:58.311 "data_offset": 2048, 00:10:58.311 "data_size": 63488 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "name": "BaseBdev4", 00:10:58.311 "uuid": "fbc75940-83de-4bc3-9808-65775e26a081", 00:10:58.311 "is_configured": true, 00:10:58.311 "data_offset": 2048, 00:10:58.312 "data_size": 63488 00:10:58.312 } 00:10:58.312 ] 00:10:58.312 }' 00:10:58.312 19:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.312 19:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.571 [2024-12-12 19:39:41.333458] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.571 19:39:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.571 "name": "Existed_Raid", 00:10:58.571 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:10:58.571 "strip_size_kb": 64, 00:10:58.571 "state": "configuring", 00:10:58.571 "raid_level": "concat", 00:10:58.571 "superblock": true, 00:10:58.571 "num_base_bdevs": 4, 00:10:58.571 "num_base_bdevs_discovered": 2, 00:10:58.571 "num_base_bdevs_operational": 4, 00:10:58.571 "base_bdevs_list": [ 00:10:58.571 { 00:10:58.571 "name": "BaseBdev1", 00:10:58.571 "uuid": "82bb137f-c202-4569-9c47-476ca26dd34a", 00:10:58.571 "is_configured": true, 00:10:58.571 "data_offset": 2048, 00:10:58.571 "data_size": 63488 00:10:58.571 }, 00:10:58.571 { 00:10:58.571 "name": null, 00:10:58.571 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:10:58.571 "is_configured": false, 00:10:58.571 "data_offset": 0, 00:10:58.571 "data_size": 63488 00:10:58.571 }, 00:10:58.571 { 00:10:58.571 "name": null, 00:10:58.571 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:10:58.571 "is_configured": false, 00:10:58.571 "data_offset": 0, 00:10:58.571 "data_size": 63488 00:10:58.571 }, 00:10:58.571 { 00:10:58.571 "name": "BaseBdev4", 00:10:58.571 "uuid": "fbc75940-83de-4bc3-9808-65775e26a081", 00:10:58.571 "is_configured": true, 00:10:58.571 "data_offset": 2048, 00:10:58.571 "data_size": 63488 00:10:58.571 } 00:10:58.571 ] 00:10:58.571 }' 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.571 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 
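The state checks in this stretch of the test hinge on `num_base_bdevs_discovered` versus `num_base_bdevs_operational`: after removing BaseBdev2 and BaseBdev3 only two of the four slots are configured, so the array stays in the `configuring` state. A toy model of that bookkeeping (illustrative only, assuming the state is driven purely by the configured count; this is not SPDK's implementation):

```python
def raid_state(base_bdevs_list, num_operational):
    # Count configured slots; the array leaves "configuring" only when
    # every operational slot has a configured base bdev.
    discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
    state = "online" if discovered == num_operational else "configuring"
    return discovered, state

bdevs = [{"is_configured": True},   # BaseBdev1
         {"is_configured": False},  # BaseBdev2 (removed)
         {"is_configured": False},  # BaseBdev3 (removed)
         {"is_configured": True}]   # BaseBdev4
print(raid_state(bdevs, 4))  # (2, 'configuring'), matching the dump above
```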
19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 [2024-12-12 19:39:41.848508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.166 "name": "Existed_Raid", 00:10:59.166 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:10:59.166 "strip_size_kb": 64, 00:10:59.166 "state": "configuring", 00:10:59.166 "raid_level": "concat", 00:10:59.166 "superblock": true, 00:10:59.166 "num_base_bdevs": 4, 00:10:59.166 "num_base_bdevs_discovered": 3, 00:10:59.166 "num_base_bdevs_operational": 4, 00:10:59.166 "base_bdevs_list": [ 00:10:59.166 { 00:10:59.166 "name": "BaseBdev1", 00:10:59.166 "uuid": "82bb137f-c202-4569-9c47-476ca26dd34a", 00:10:59.166 "is_configured": true, 00:10:59.166 "data_offset": 2048, 00:10:59.166 "data_size": 63488 00:10:59.166 }, 00:10:59.166 { 00:10:59.166 "name": null, 00:10:59.166 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:10:59.166 "is_configured": false, 00:10:59.166 "data_offset": 0, 00:10:59.166 "data_size": 63488 00:10:59.166 }, 00:10:59.166 { 00:10:59.166 "name": "BaseBdev3", 00:10:59.166 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:10:59.166 "is_configured": true, 00:10:59.166 "data_offset": 2048, 00:10:59.166 "data_size": 63488 00:10:59.166 }, 00:10:59.166 { 00:10:59.166 "name": "BaseBdev4", 00:10:59.166 "uuid": 
"fbc75940-83de-4bc3-9808-65775e26a081", 00:10:59.166 "is_configured": true, 00:10:59.166 "data_offset": 2048, 00:10:59.166 "data_size": 63488 00:10:59.166 } 00:10:59.166 ] 00:10:59.166 }' 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.166 19:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.752 [2024-12-12 19:39:42.371695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.752 "name": "Existed_Raid", 00:10:59.752 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:10:59.752 "strip_size_kb": 64, 00:10:59.752 "state": "configuring", 00:10:59.752 "raid_level": "concat", 00:10:59.752 "superblock": true, 00:10:59.752 "num_base_bdevs": 4, 00:10:59.752 "num_base_bdevs_discovered": 2, 00:10:59.752 "num_base_bdevs_operational": 4, 00:10:59.752 "base_bdevs_list": [ 00:10:59.752 { 00:10:59.752 "name": null, 00:10:59.752 
"uuid": "82bb137f-c202-4569-9c47-476ca26dd34a", 00:10:59.752 "is_configured": false, 00:10:59.752 "data_offset": 0, 00:10:59.752 "data_size": 63488 00:10:59.752 }, 00:10:59.752 { 00:10:59.752 "name": null, 00:10:59.752 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:10:59.752 "is_configured": false, 00:10:59.752 "data_offset": 0, 00:10:59.752 "data_size": 63488 00:10:59.752 }, 00:10:59.752 { 00:10:59.752 "name": "BaseBdev3", 00:10:59.752 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:10:59.752 "is_configured": true, 00:10:59.752 "data_offset": 2048, 00:10:59.752 "data_size": 63488 00:10:59.752 }, 00:10:59.752 { 00:10:59.752 "name": "BaseBdev4", 00:10:59.752 "uuid": "fbc75940-83de-4bc3-9808-65775e26a081", 00:10:59.752 "is_configured": true, 00:10:59.752 "data_offset": 2048, 00:10:59.752 "data_size": 63488 00:10:59.752 } 00:10:59.752 ] 00:10:59.752 }' 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.752 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.322 [2024-12-12 19:39:42.947656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.322 19:39:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.322 19:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.322 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.322 "name": "Existed_Raid", 00:11:00.322 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:11:00.322 "strip_size_kb": 64, 00:11:00.322 "state": "configuring", 00:11:00.322 "raid_level": "concat", 00:11:00.322 "superblock": true, 00:11:00.322 "num_base_bdevs": 4, 00:11:00.322 "num_base_bdevs_discovered": 3, 00:11:00.322 "num_base_bdevs_operational": 4, 00:11:00.322 "base_bdevs_list": [ 00:11:00.322 { 00:11:00.322 "name": null, 00:11:00.322 "uuid": "82bb137f-c202-4569-9c47-476ca26dd34a", 00:11:00.322 "is_configured": false, 00:11:00.322 "data_offset": 0, 00:11:00.322 "data_size": 63488 00:11:00.322 }, 00:11:00.322 { 00:11:00.322 "name": "BaseBdev2", 00:11:00.322 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:11:00.322 "is_configured": true, 00:11:00.322 "data_offset": 2048, 00:11:00.322 "data_size": 63488 00:11:00.322 }, 00:11:00.322 { 00:11:00.322 "name": "BaseBdev3", 00:11:00.322 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:11:00.322 "is_configured": true, 00:11:00.322 "data_offset": 2048, 00:11:00.322 "data_size": 63488 00:11:00.322 }, 00:11:00.322 { 00:11:00.322 "name": "BaseBdev4", 00:11:00.322 "uuid": "fbc75940-83de-4bc3-9808-65775e26a081", 00:11:00.322 "is_configured": true, 00:11:00.322 "data_offset": 2048, 00:11:00.322 "data_size": 63488 00:11:00.322 } 00:11:00.322 ] 00:11:00.322 }' 00:11:00.322 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.322 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.892 19:39:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82bb137f-c202-4569-9c47-476ca26dd34a 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.892 [2024-12-12 19:39:43.573890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:00.892 [2024-12-12 19:39:43.574146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:00.892 [2024-12-12 19:39:43.574159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:00.892 [2024-12-12 19:39:43.574448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:00.892 NewBaseBdev 00:11:00.892 [2024-12-12 19:39:43.574616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:00.892 [2024-12-12 19:39:43.574629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:00.892 [2024-12-12 19:39:43.574833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:00.892 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.892 19:39:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.892 [ 00:11:00.892 { 00:11:00.892 "name": "NewBaseBdev", 00:11:00.892 "aliases": [ 00:11:00.892 "82bb137f-c202-4569-9c47-476ca26dd34a" 00:11:00.892 ], 00:11:00.892 "product_name": "Malloc disk", 00:11:00.892 "block_size": 512, 00:11:00.892 "num_blocks": 65536, 00:11:00.892 "uuid": "82bb137f-c202-4569-9c47-476ca26dd34a", 00:11:00.892 "assigned_rate_limits": { 00:11:00.892 "rw_ios_per_sec": 0, 00:11:00.892 "rw_mbytes_per_sec": 0, 00:11:00.892 "r_mbytes_per_sec": 0, 00:11:00.893 "w_mbytes_per_sec": 0 00:11:00.893 }, 00:11:00.893 "claimed": true, 00:11:00.893 "claim_type": "exclusive_write", 00:11:00.893 "zoned": false, 00:11:00.893 "supported_io_types": { 00:11:00.893 "read": true, 00:11:00.893 "write": true, 00:11:00.893 "unmap": true, 00:11:00.893 "flush": true, 00:11:00.893 "reset": true, 00:11:00.893 "nvme_admin": false, 00:11:00.893 "nvme_io": false, 00:11:00.893 "nvme_io_md": false, 00:11:00.893 "write_zeroes": true, 00:11:00.893 "zcopy": true, 00:11:00.893 "get_zone_info": false, 00:11:00.893 "zone_management": false, 00:11:00.893 "zone_append": false, 00:11:00.893 "compare": false, 00:11:00.893 "compare_and_write": false, 00:11:00.893 "abort": true, 00:11:00.893 "seek_hole": false, 00:11:00.893 "seek_data": false, 00:11:00.893 "copy": true, 00:11:00.893 "nvme_iov_md": false 00:11:00.893 }, 00:11:00.893 "memory_domains": [ 00:11:00.893 { 00:11:00.893 "dma_device_id": "system", 00:11:00.893 "dma_device_type": 1 00:11:00.893 }, 00:11:00.893 { 00:11:00.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.893 "dma_device_type": 2 00:11:00.893 } 00:11:00.893 ], 00:11:00.893 "driver_specific": {} 00:11:00.893 } 00:11:00.893 ] 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.893 19:39:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.893 "name": "Existed_Raid", 00:11:00.893 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:11:00.893 "strip_size_kb": 64, 00:11:00.893 
"state": "online", 00:11:00.893 "raid_level": "concat", 00:11:00.893 "superblock": true, 00:11:00.893 "num_base_bdevs": 4, 00:11:00.893 "num_base_bdevs_discovered": 4, 00:11:00.893 "num_base_bdevs_operational": 4, 00:11:00.893 "base_bdevs_list": [ 00:11:00.893 { 00:11:00.893 "name": "NewBaseBdev", 00:11:00.893 "uuid": "82bb137f-c202-4569-9c47-476ca26dd34a", 00:11:00.893 "is_configured": true, 00:11:00.893 "data_offset": 2048, 00:11:00.893 "data_size": 63488 00:11:00.893 }, 00:11:00.893 { 00:11:00.893 "name": "BaseBdev2", 00:11:00.893 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:11:00.893 "is_configured": true, 00:11:00.893 "data_offset": 2048, 00:11:00.893 "data_size": 63488 00:11:00.893 }, 00:11:00.893 { 00:11:00.893 "name": "BaseBdev3", 00:11:00.893 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:11:00.893 "is_configured": true, 00:11:00.893 "data_offset": 2048, 00:11:00.893 "data_size": 63488 00:11:00.893 }, 00:11:00.893 { 00:11:00.893 "name": "BaseBdev4", 00:11:00.893 "uuid": "fbc75940-83de-4bc3-9808-65775e26a081", 00:11:00.893 "is_configured": true, 00:11:00.893 "data_offset": 2048, 00:11:00.893 "data_size": 63488 00:11:00.893 } 00:11:00.893 ] 00:11:00.893 }' 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.893 19:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.462 
19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.462 [2024-12-12 19:39:44.017629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.462 "name": "Existed_Raid", 00:11:01.462 "aliases": [ 00:11:01.462 "11e282d7-3743-4713-9561-52d1ef3d3612" 00:11:01.462 ], 00:11:01.462 "product_name": "Raid Volume", 00:11:01.462 "block_size": 512, 00:11:01.462 "num_blocks": 253952, 00:11:01.462 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:11:01.462 "assigned_rate_limits": { 00:11:01.462 "rw_ios_per_sec": 0, 00:11:01.462 "rw_mbytes_per_sec": 0, 00:11:01.462 "r_mbytes_per_sec": 0, 00:11:01.462 "w_mbytes_per_sec": 0 00:11:01.462 }, 00:11:01.462 "claimed": false, 00:11:01.462 "zoned": false, 00:11:01.462 "supported_io_types": { 00:11:01.462 "read": true, 00:11:01.462 "write": true, 00:11:01.462 "unmap": true, 00:11:01.462 "flush": true, 00:11:01.462 "reset": true, 00:11:01.462 "nvme_admin": false, 00:11:01.462 "nvme_io": false, 00:11:01.462 "nvme_io_md": false, 00:11:01.462 "write_zeroes": true, 00:11:01.462 "zcopy": false, 00:11:01.462 "get_zone_info": false, 00:11:01.462 "zone_management": false, 00:11:01.462 "zone_append": false, 00:11:01.462 "compare": false, 00:11:01.462 "compare_and_write": false, 00:11:01.462 "abort": 
false, 00:11:01.462 "seek_hole": false, 00:11:01.462 "seek_data": false, 00:11:01.462 "copy": false, 00:11:01.462 "nvme_iov_md": false 00:11:01.462 }, 00:11:01.462 "memory_domains": [ 00:11:01.462 { 00:11:01.462 "dma_device_id": "system", 00:11:01.462 "dma_device_type": 1 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.462 "dma_device_type": 2 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 "dma_device_id": "system", 00:11:01.462 "dma_device_type": 1 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.462 "dma_device_type": 2 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 "dma_device_id": "system", 00:11:01.462 "dma_device_type": 1 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.462 "dma_device_type": 2 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 "dma_device_id": "system", 00:11:01.462 "dma_device_type": 1 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.462 "dma_device_type": 2 00:11:01.462 } 00:11:01.462 ], 00:11:01.462 "driver_specific": { 00:11:01.462 "raid": { 00:11:01.462 "uuid": "11e282d7-3743-4713-9561-52d1ef3d3612", 00:11:01.462 "strip_size_kb": 64, 00:11:01.462 "state": "online", 00:11:01.462 "raid_level": "concat", 00:11:01.462 "superblock": true, 00:11:01.462 "num_base_bdevs": 4, 00:11:01.462 "num_base_bdevs_discovered": 4, 00:11:01.462 "num_base_bdevs_operational": 4, 00:11:01.462 "base_bdevs_list": [ 00:11:01.462 { 00:11:01.462 "name": "NewBaseBdev", 00:11:01.462 "uuid": "82bb137f-c202-4569-9c47-476ca26dd34a", 00:11:01.462 "is_configured": true, 00:11:01.462 "data_offset": 2048, 00:11:01.462 "data_size": 63488 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 "name": "BaseBdev2", 00:11:01.462 "uuid": "79403c69-b883-48cc-8660-2be04ca1cb37", 00:11:01.462 "is_configured": true, 00:11:01.462 "data_offset": 2048, 00:11:01.462 "data_size": 63488 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 
"name": "BaseBdev3", 00:11:01.462 "uuid": "5527cfbe-5a08-4fb1-8895-09ceda207b08", 00:11:01.462 "is_configured": true, 00:11:01.462 "data_offset": 2048, 00:11:01.462 "data_size": 63488 00:11:01.462 }, 00:11:01.462 { 00:11:01.462 "name": "BaseBdev4", 00:11:01.462 "uuid": "fbc75940-83de-4bc3-9808-65775e26a081", 00:11:01.462 "is_configured": true, 00:11:01.462 "data_offset": 2048, 00:11:01.462 "data_size": 63488 00:11:01.462 } 00:11:01.462 ] 00:11:01.462 } 00:11:01.462 } 00:11:01.462 }' 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:01.462 BaseBdev2 00:11:01.462 BaseBdev3 00:11:01.462 BaseBdev4' 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.462 19:39:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.462 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.463 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.723 [2024-12-12 19:39:44.324705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.723 [2024-12-12 19:39:44.324795] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.723 [2024-12-12 19:39:44.324934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.723 [2024-12-12 19:39:44.325031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.723 [2024-12-12 19:39:44.325042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73647 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73647 ']' 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73647 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73647 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73647' 00:11:01.723 killing process with pid 73647 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73647 00:11:01.723 [2024-12-12 19:39:44.364034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.723 19:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73647 00:11:01.983 [2024-12-12 19:39:44.790087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.363 ************************************ 00:11:03.363 END TEST raid_state_function_test_sb 00:11:03.363 ************************************ 00:11:03.363 19:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:03.363 00:11:03.363 real 0m11.704s 00:11:03.364 user 0m18.313s 00:11:03.364 sys 
0m2.202s 00:11:03.364 19:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.364 19:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.364 19:39:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:03.364 19:39:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.364 19:39:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.364 19:39:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.364 ************************************ 00:11:03.364 START TEST raid_superblock_test 00:11:03.364 ************************************ 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74317 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74317 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74317 ']' 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.364 19:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.364 [2024-12-12 19:39:46.172518] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:03.364 [2024-12-12 19:39:46.172723] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74317 ] 00:11:03.623 [2024-12-12 19:39:46.344646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.882 [2024-12-12 19:39:46.486743] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.882 [2024-12-12 19:39:46.718692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.882 [2024-12-12 19:39:46.718867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.452 19:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.452 19:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:04.452 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:04.452 19:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:04.452 
19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.452 malloc1 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.452 [2024-12-12 19:39:47.059607] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.452 [2024-12-12 19:39:47.059682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.452 [2024-12-12 19:39:47.059707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:04.452 [2024-12-12 19:39:47.059716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.452 [2024-12-12 19:39:47.062186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.452 [2024-12-12 19:39:47.062297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.452 pt1 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.452 malloc2 00:11:04.452 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.453 [2024-12-12 19:39:47.120888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.453 [2024-12-12 19:39:47.121020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.453 [2024-12-12 19:39:47.121061] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:04.453 [2024-12-12 19:39:47.121089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.453 [2024-12-12 19:39:47.123557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.453 [2024-12-12 19:39:47.123645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.453 
pt2 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.453 malloc3 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.453 [2024-12-12 19:39:47.205462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:04.453 [2024-12-12 19:39:47.205595] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.453 [2024-12-12 19:39:47.205638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:04.453 [2024-12-12 19:39:47.205675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.453 [2024-12-12 19:39:47.208144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.453 [2024-12-12 19:39:47.208219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:04.453 pt3 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.453 malloc4 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.453 [2024-12-12 19:39:47.270320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:04.453 [2024-12-12 19:39:47.270398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.453 [2024-12-12 19:39:47.270420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:04.453 [2024-12-12 19:39:47.270430] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.453 [2024-12-12 19:39:47.272868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.453 [2024-12-12 19:39:47.272905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:04.453 pt4 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.453 [2024-12-12 19:39:47.282345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.453 [2024-12-12 
19:39:47.284632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.453 [2024-12-12 19:39:47.284719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:04.453 [2024-12-12 19:39:47.284769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:04.453 [2024-12-12 19:39:47.284972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:04.453 [2024-12-12 19:39:47.284992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:04.453 [2024-12-12 19:39:47.285314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:04.453 [2024-12-12 19:39:47.285509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:04.453 [2024-12-12 19:39:47.285523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:04.453 [2024-12-12 19:39:47.285801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.453 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.712 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.712 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.712 "name": "raid_bdev1", 00:11:04.712 "uuid": "79aa7bec-9714-425c-b0cb-6ffcb853e512", 00:11:04.712 "strip_size_kb": 64, 00:11:04.712 "state": "online", 00:11:04.712 "raid_level": "concat", 00:11:04.712 "superblock": true, 00:11:04.712 "num_base_bdevs": 4, 00:11:04.712 "num_base_bdevs_discovered": 4, 00:11:04.712 "num_base_bdevs_operational": 4, 00:11:04.712 "base_bdevs_list": [ 00:11:04.712 { 00:11:04.712 "name": "pt1", 00:11:04.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.712 "is_configured": true, 00:11:04.712 "data_offset": 2048, 00:11:04.712 "data_size": 63488 00:11:04.712 }, 00:11:04.712 { 00:11:04.712 "name": "pt2", 00:11:04.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.712 "is_configured": true, 00:11:04.712 "data_offset": 2048, 00:11:04.712 "data_size": 63488 00:11:04.712 }, 00:11:04.712 { 00:11:04.712 "name": "pt3", 00:11:04.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.712 "is_configured": true, 00:11:04.712 "data_offset": 2048, 00:11:04.712 
"data_size": 63488 00:11:04.712 }, 00:11:04.712 { 00:11:04.712 "name": "pt4", 00:11:04.712 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.712 "is_configured": true, 00:11:04.712 "data_offset": 2048, 00:11:04.712 "data_size": 63488 00:11:04.712 } 00:11:04.712 ] 00:11:04.712 }' 00:11:04.712 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.712 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.972 [2024-12-12 19:39:47.698008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.972 "name": "raid_bdev1", 00:11:04.972 "aliases": [ 00:11:04.972 "79aa7bec-9714-425c-b0cb-6ffcb853e512" 
00:11:04.972 ], 00:11:04.972 "product_name": "Raid Volume", 00:11:04.972 "block_size": 512, 00:11:04.972 "num_blocks": 253952, 00:11:04.972 "uuid": "79aa7bec-9714-425c-b0cb-6ffcb853e512", 00:11:04.972 "assigned_rate_limits": { 00:11:04.972 "rw_ios_per_sec": 0, 00:11:04.972 "rw_mbytes_per_sec": 0, 00:11:04.972 "r_mbytes_per_sec": 0, 00:11:04.972 "w_mbytes_per_sec": 0 00:11:04.972 }, 00:11:04.972 "claimed": false, 00:11:04.972 "zoned": false, 00:11:04.972 "supported_io_types": { 00:11:04.972 "read": true, 00:11:04.972 "write": true, 00:11:04.972 "unmap": true, 00:11:04.972 "flush": true, 00:11:04.972 "reset": true, 00:11:04.972 "nvme_admin": false, 00:11:04.972 "nvme_io": false, 00:11:04.972 "nvme_io_md": false, 00:11:04.972 "write_zeroes": true, 00:11:04.972 "zcopy": false, 00:11:04.972 "get_zone_info": false, 00:11:04.972 "zone_management": false, 00:11:04.972 "zone_append": false, 00:11:04.972 "compare": false, 00:11:04.972 "compare_and_write": false, 00:11:04.972 "abort": false, 00:11:04.972 "seek_hole": false, 00:11:04.972 "seek_data": false, 00:11:04.972 "copy": false, 00:11:04.972 "nvme_iov_md": false 00:11:04.972 }, 00:11:04.972 "memory_domains": [ 00:11:04.972 { 00:11:04.972 "dma_device_id": "system", 00:11:04.972 "dma_device_type": 1 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.972 "dma_device_type": 2 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "dma_device_id": "system", 00:11:04.972 "dma_device_type": 1 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.972 "dma_device_type": 2 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "dma_device_id": "system", 00:11:04.972 "dma_device_type": 1 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.972 "dma_device_type": 2 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "dma_device_id": "system", 00:11:04.972 "dma_device_type": 1 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:04.972 "dma_device_type": 2 00:11:04.972 } 00:11:04.972 ], 00:11:04.972 "driver_specific": { 00:11:04.972 "raid": { 00:11:04.972 "uuid": "79aa7bec-9714-425c-b0cb-6ffcb853e512", 00:11:04.972 "strip_size_kb": 64, 00:11:04.972 "state": "online", 00:11:04.972 "raid_level": "concat", 00:11:04.972 "superblock": true, 00:11:04.972 "num_base_bdevs": 4, 00:11:04.972 "num_base_bdevs_discovered": 4, 00:11:04.972 "num_base_bdevs_operational": 4, 00:11:04.972 "base_bdevs_list": [ 00:11:04.972 { 00:11:04.972 "name": "pt1", 00:11:04.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.972 "is_configured": true, 00:11:04.972 "data_offset": 2048, 00:11:04.972 "data_size": 63488 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "name": "pt2", 00:11:04.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.972 "is_configured": true, 00:11:04.972 "data_offset": 2048, 00:11:04.972 "data_size": 63488 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "name": "pt3", 00:11:04.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.972 "is_configured": true, 00:11:04.972 "data_offset": 2048, 00:11:04.972 "data_size": 63488 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "name": "pt4", 00:11:04.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.972 "is_configured": true, 00:11:04.972 "data_offset": 2048, 00:11:04.972 "data_size": 63488 00:11:04.972 } 00:11:04.972 ] 00:11:04.972 } 00:11:04.972 } 00:11:04.972 }' 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:04.972 pt2 00:11:04.972 pt3 00:11:04.972 pt4' 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.972 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.232 19:39:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:05.232 19:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.232 [2024-12-12 19:39:48.005391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=79aa7bec-9714-425c-b0cb-6ffcb853e512 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 79aa7bec-9714-425c-b0cb-6ffcb853e512 ']' 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.232 [2024-12-12 19:39:48.048986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.232 [2024-12-12 19:39:48.049053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.232 [2024-12-12 19:39:48.049171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.232 [2024-12-12 19:39:48.049290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.232 [2024-12-12 19:39:48.049340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.232 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.493 19:39:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.493 [2024-12-12 19:39:48.212763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:05.493 [2024-12-12 19:39:48.215043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:05.493 [2024-12-12 19:39:48.215146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:05.493 [2024-12-12 19:39:48.215219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:05.493 [2024-12-12 19:39:48.215317] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:05.493 [2024-12-12 19:39:48.215439] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:05.493 [2024-12-12 19:39:48.215503] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:05.493 [2024-12-12 19:39:48.215581] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:05.493 [2024-12-12 19:39:48.215649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.493 [2024-12-12 19:39:48.215695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:05.493 request: 00:11:05.493 { 00:11:05.493 "name": "raid_bdev1", 00:11:05.493 "raid_level": "concat", 00:11:05.493 "base_bdevs": [ 00:11:05.493 "malloc1", 00:11:05.493 "malloc2", 00:11:05.493 "malloc3", 00:11:05.493 "malloc4" 00:11:05.493 ], 00:11:05.493 "strip_size_kb": 64, 00:11:05.493 "superblock": false, 00:11:05.493 "method": "bdev_raid_create", 00:11:05.493 "req_id": 1 00:11:05.493 } 00:11:05.493 Got JSON-RPC error response 00:11:05.493 response: 00:11:05.493 { 00:11:05.493 "code": -17, 00:11:05.493 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:05.493 } 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.493 [2024-12-12 19:39:48.276641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:05.493 [2024-12-12 19:39:48.276697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.493 [2024-12-12 19:39:48.276716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:05.493 [2024-12-12 19:39:48.276728] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.493 [2024-12-12 19:39:48.279226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.493 [2024-12-12 19:39:48.279268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:05.493 [2024-12-12 19:39:48.279355] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:05.493 [2024-12-12 19:39:48.279414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:05.493 pt1 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.493 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.494 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.494 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.494 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.494 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.494 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.494 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.494 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.494 "name": "raid_bdev1", 00:11:05.494 "uuid": "79aa7bec-9714-425c-b0cb-6ffcb853e512", 00:11:05.494 "strip_size_kb": 64, 00:11:05.494 "state": "configuring", 00:11:05.494 "raid_level": "concat", 00:11:05.494 "superblock": true, 00:11:05.494 "num_base_bdevs": 4, 00:11:05.494 "num_base_bdevs_discovered": 1, 00:11:05.494 "num_base_bdevs_operational": 4, 00:11:05.494 "base_bdevs_list": [ 00:11:05.494 { 00:11:05.494 "name": "pt1", 00:11:05.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.494 "is_configured": true, 00:11:05.494 "data_offset": 2048, 00:11:05.494 "data_size": 63488 00:11:05.494 }, 00:11:05.494 { 00:11:05.494 "name": null, 00:11:05.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.494 "is_configured": false, 00:11:05.494 "data_offset": 2048, 00:11:05.494 "data_size": 63488 00:11:05.494 }, 00:11:05.494 { 00:11:05.494 "name": null, 00:11:05.494 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.494 "is_configured": false, 00:11:05.494 "data_offset": 2048, 00:11:05.494 "data_size": 63488 00:11:05.494 }, 00:11:05.494 { 00:11:05.494 "name": null, 00:11:05.494 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.494 "is_configured": false, 00:11:05.494 "data_offset": 2048, 00:11:05.494 "data_size": 63488 00:11:05.494 } 00:11:05.494 ] 00:11:05.494 }' 00:11:05.494 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.494 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.063 [2024-12-12 19:39:48.763871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.063 [2024-12-12 19:39:48.764054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.063 [2024-12-12 19:39:48.764112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:06.063 [2024-12-12 19:39:48.764153] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.063 [2024-12-12 19:39:48.764746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.063 [2024-12-12 19:39:48.764816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.063 [2024-12-12 19:39:48.764958] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:06.063 [2024-12-12 19:39:48.765016] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.063 pt2 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.063 [2024-12-12 19:39:48.775840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.063 19:39:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.063 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.063 "name": "raid_bdev1", 00:11:06.063 "uuid": "79aa7bec-9714-425c-b0cb-6ffcb853e512", 00:11:06.063 "strip_size_kb": 64, 00:11:06.063 "state": "configuring", 00:11:06.063 "raid_level": "concat", 00:11:06.063 "superblock": true, 00:11:06.064 "num_base_bdevs": 4, 00:11:06.064 "num_base_bdevs_discovered": 1, 00:11:06.064 "num_base_bdevs_operational": 4, 00:11:06.064 "base_bdevs_list": [ 00:11:06.064 { 00:11:06.064 "name": "pt1", 00:11:06.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.064 "is_configured": true, 00:11:06.064 "data_offset": 2048, 00:11:06.064 "data_size": 63488 00:11:06.064 }, 00:11:06.064 { 00:11:06.064 "name": null, 00:11:06.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.064 "is_configured": false, 00:11:06.064 "data_offset": 0, 00:11:06.064 "data_size": 63488 00:11:06.064 }, 00:11:06.064 { 00:11:06.064 "name": null, 00:11:06.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.064 "is_configured": false, 00:11:06.064 "data_offset": 2048, 00:11:06.064 "data_size": 63488 00:11:06.064 }, 00:11:06.064 { 00:11:06.064 "name": null, 00:11:06.064 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.064 "is_configured": false, 00:11:06.064 "data_offset": 2048, 00:11:06.064 "data_size": 63488 00:11:06.064 } 00:11:06.064 ] 00:11:06.064 }' 00:11:06.064 19:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.064 19:39:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.633 [2024-12-12 19:39:49.243083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.633 [2024-12-12 19:39:49.243231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.633 [2024-12-12 19:39:49.243278] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:06.633 [2024-12-12 19:39:49.243289] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.633 [2024-12-12 19:39:49.243896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.633 [2024-12-12 19:39:49.243918] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.633 [2024-12-12 19:39:49.244025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:06.633 [2024-12-12 19:39:49.244050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.633 pt2 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.633 [2024-12-12 19:39:49.255035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.633 [2024-12-12 19:39:49.255095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.633 [2024-12-12 19:39:49.255117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:06.633 [2024-12-12 19:39:49.255126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.633 [2024-12-12 19:39:49.255595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.633 [2024-12-12 19:39:49.255615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.633 [2024-12-12 19:39:49.255699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:06.633 [2024-12-12 19:39:49.255728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.633 pt3 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.633 [2024-12-12 19:39:49.266962] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:06.633 [2024-12-12 19:39:49.267012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.633 [2024-12-12 19:39:49.267030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:06.633 [2024-12-12 19:39:49.267038] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.633 [2024-12-12 19:39:49.267484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.633 [2024-12-12 19:39:49.267500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:06.633 [2024-12-12 19:39:49.267591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:06.633 [2024-12-12 19:39:49.267616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:06.633 [2024-12-12 19:39:49.267754] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:06.633 [2024-12-12 19:39:49.267763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:06.633 [2024-12-12 19:39:49.268057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:06.633 [2024-12-12 19:39:49.268228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:06.633 [2024-12-12 19:39:49.268242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:06.633 [2024-12-12 19:39:49.268389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.633 pt4 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.633 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.633 "name": "raid_bdev1", 00:11:06.633 "uuid": "79aa7bec-9714-425c-b0cb-6ffcb853e512", 00:11:06.633 "strip_size_kb": 64, 00:11:06.633 "state": "online", 00:11:06.633 "raid_level": "concat", 00:11:06.633 
"superblock": true, 00:11:06.633 "num_base_bdevs": 4, 00:11:06.633 "num_base_bdevs_discovered": 4, 00:11:06.633 "num_base_bdevs_operational": 4, 00:11:06.633 "base_bdevs_list": [ 00:11:06.633 { 00:11:06.633 "name": "pt1", 00:11:06.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.633 "is_configured": true, 00:11:06.633 "data_offset": 2048, 00:11:06.633 "data_size": 63488 00:11:06.633 }, 00:11:06.633 { 00:11:06.633 "name": "pt2", 00:11:06.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.633 "is_configured": true, 00:11:06.633 "data_offset": 2048, 00:11:06.633 "data_size": 63488 00:11:06.634 }, 00:11:06.634 { 00:11:06.634 "name": "pt3", 00:11:06.634 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.634 "is_configured": true, 00:11:06.634 "data_offset": 2048, 00:11:06.634 "data_size": 63488 00:11:06.634 }, 00:11:06.634 { 00:11:06.634 "name": "pt4", 00:11:06.634 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.634 "is_configured": true, 00:11:06.634 "data_offset": 2048, 00:11:06.634 "data_size": 63488 00:11:06.634 } 00:11:06.634 ] 00:11:06.634 }' 00:11:06.634 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.634 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.203 19:39:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.203 [2024-12-12 19:39:49.762578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.203 "name": "raid_bdev1", 00:11:07.203 "aliases": [ 00:11:07.203 "79aa7bec-9714-425c-b0cb-6ffcb853e512" 00:11:07.203 ], 00:11:07.203 "product_name": "Raid Volume", 00:11:07.203 "block_size": 512, 00:11:07.203 "num_blocks": 253952, 00:11:07.203 "uuid": "79aa7bec-9714-425c-b0cb-6ffcb853e512", 00:11:07.203 "assigned_rate_limits": { 00:11:07.203 "rw_ios_per_sec": 0, 00:11:07.203 "rw_mbytes_per_sec": 0, 00:11:07.203 "r_mbytes_per_sec": 0, 00:11:07.203 "w_mbytes_per_sec": 0 00:11:07.203 }, 00:11:07.203 "claimed": false, 00:11:07.203 "zoned": false, 00:11:07.203 "supported_io_types": { 00:11:07.203 "read": true, 00:11:07.203 "write": true, 00:11:07.203 "unmap": true, 00:11:07.203 "flush": true, 00:11:07.203 "reset": true, 00:11:07.203 "nvme_admin": false, 00:11:07.203 "nvme_io": false, 00:11:07.203 "nvme_io_md": false, 00:11:07.203 "write_zeroes": true, 00:11:07.203 "zcopy": false, 00:11:07.203 "get_zone_info": false, 00:11:07.203 "zone_management": false, 00:11:07.203 "zone_append": false, 00:11:07.203 "compare": false, 00:11:07.203 "compare_and_write": false, 00:11:07.203 "abort": false, 00:11:07.203 "seek_hole": false, 00:11:07.203 "seek_data": false, 00:11:07.203 "copy": false, 00:11:07.203 "nvme_iov_md": false 00:11:07.203 }, 00:11:07.203 
"memory_domains": [ 00:11:07.203 { 00:11:07.203 "dma_device_id": "system", 00:11:07.203 "dma_device_type": 1 00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.203 "dma_device_type": 2 00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "dma_device_id": "system", 00:11:07.203 "dma_device_type": 1 00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.203 "dma_device_type": 2 00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "dma_device_id": "system", 00:11:07.203 "dma_device_type": 1 00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.203 "dma_device_type": 2 00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "dma_device_id": "system", 00:11:07.203 "dma_device_type": 1 00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.203 "dma_device_type": 2 00:11:07.203 } 00:11:07.203 ], 00:11:07.203 "driver_specific": { 00:11:07.203 "raid": { 00:11:07.203 "uuid": "79aa7bec-9714-425c-b0cb-6ffcb853e512", 00:11:07.203 "strip_size_kb": 64, 00:11:07.203 "state": "online", 00:11:07.203 "raid_level": "concat", 00:11:07.203 "superblock": true, 00:11:07.203 "num_base_bdevs": 4, 00:11:07.203 "num_base_bdevs_discovered": 4, 00:11:07.203 "num_base_bdevs_operational": 4, 00:11:07.203 "base_bdevs_list": [ 00:11:07.203 { 00:11:07.203 "name": "pt1", 00:11:07.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.203 "is_configured": true, 00:11:07.203 "data_offset": 2048, 00:11:07.203 "data_size": 63488 00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "name": "pt2", 00:11:07.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.203 "is_configured": true, 00:11:07.203 "data_offset": 2048, 00:11:07.203 "data_size": 63488 00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "name": "pt3", 00:11:07.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.203 "is_configured": true, 00:11:07.203 "data_offset": 2048, 00:11:07.203 "data_size": 63488 
00:11:07.203 }, 00:11:07.203 { 00:11:07.203 "name": "pt4", 00:11:07.203 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.203 "is_configured": true, 00:11:07.203 "data_offset": 2048, 00:11:07.203 "data_size": 63488 00:11:07.203 } 00:11:07.203 ] 00:11:07.203 } 00:11:07.203 } 00:11:07.203 }' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:07.203 pt2 00:11:07.203 pt3 00:11:07.203 pt4' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.203 19:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.203 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.203 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:07.203 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.204 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.204 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.204 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:07.463 [2024-12-12 19:39:50.085985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 79aa7bec-9714-425c-b0cb-6ffcb853e512 '!=' 79aa7bec-9714-425c-b0cb-6ffcb853e512 ']' 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74317 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74317 ']' 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74317 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74317 00:11:07.463 killing process with pid 74317 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74317' 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74317 00:11:07.463 [2024-12-12 19:39:50.175866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.463 [2024-12-12 19:39:50.175973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.463 19:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74317 00:11:07.463 [2024-12-12 19:39:50.176063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.463 [2024-12-12 19:39:50.176074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:08.031 [2024-12-12 19:39:50.609019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.417 ************************************ 00:11:09.417 END TEST raid_superblock_test 00:11:09.417 ************************************ 00:11:09.417 19:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:09.417 00:11:09.417 real 0m5.760s 00:11:09.417 user 0m8.089s 00:11:09.417 sys 0m1.052s 00:11:09.417 19:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.417 19:39:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.417 19:39:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:09.417 19:39:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:09.417 19:39:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.417 19:39:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.417 ************************************ 00:11:09.417 START TEST raid_read_error_test 00:11:09.417 ************************************ 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XNPzNQPRXD 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74587 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74587 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74587 ']' 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.417 19:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.417 [2024-12-12 19:39:52.014509] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:09.417 [2024-12-12 19:39:52.014739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74587 ] 00:11:09.417 [2024-12-12 19:39:52.188082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.683 [2024-12-12 19:39:52.328058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.942 [2024-12-12 19:39:52.563045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.942 [2024-12-12 19:39:52.563248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.201 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.201 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:10.201 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.201 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:10.201 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.201 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.201 BaseBdev1_malloc 00:11:10.201 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.202 true 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.202 [2024-12-12 19:39:52.918045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:10.202 [2024-12-12 19:39:52.918174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.202 [2024-12-12 19:39:52.918214] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:10.202 [2024-12-12 19:39:52.918246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.202 [2024-12-12 19:39:52.920801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.202 [2024-12-12 19:39:52.920881] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:10.202 BaseBdev1 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.202 BaseBdev2_malloc 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.202 true 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.202 [2024-12-12 19:39:52.992937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:10.202 [2024-12-12 19:39:52.993078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.202 [2024-12-12 19:39:52.993114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:10.202 [2024-12-12 19:39:52.993144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.202 [2024-12-12 19:39:52.995593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.202 [2024-12-12 19:39:52.995673] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:10.202 BaseBdev2 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.202 19:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.462 BaseBdev3_malloc 00:11:10.462 19:39:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.462 true 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.462 [2024-12-12 19:39:53.077799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:10.462 [2024-12-12 19:39:53.077934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.462 [2024-12-12 19:39:53.077957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:10.462 [2024-12-12 19:39:53.077969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.462 [2024-12-12 19:39:53.080417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.462 [2024-12-12 19:39:53.080457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:10.462 BaseBdev3 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.462 BaseBdev4_malloc 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.462 true 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.462 [2024-12-12 19:39:53.151459] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:10.462 [2024-12-12 19:39:53.151610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.462 [2024-12-12 19:39:53.151636] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:10.462 [2024-12-12 19:39:53.151648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.462 [2024-12-12 19:39:53.154073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.462 BaseBdev4 00:11:10.462 [2024-12-12 19:39:53.154158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.462 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.462 [2024-12-12 19:39:53.163509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.462 [2024-12-12 19:39:53.165669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.462 [2024-12-12 19:39:53.165787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.462 [2024-12-12 19:39:53.165887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:10.462 [2024-12-12 19:39:53.166188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:10.462 [2024-12-12 19:39:53.166245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:10.463 [2024-12-12 19:39:53.166571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:10.463 [2024-12-12 19:39:53.166782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:10.463 [2024-12-12 19:39:53.166828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:10.463 [2024-12-12 19:39:53.167050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:10.463 19:39:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.463 "name": "raid_bdev1", 00:11:10.463 "uuid": "35e83a58-774c-4606-8a0f-d7f85d6ad590", 00:11:10.463 "strip_size_kb": 64, 00:11:10.463 "state": "online", 00:11:10.463 "raid_level": "concat", 00:11:10.463 "superblock": true, 00:11:10.463 "num_base_bdevs": 4, 00:11:10.463 "num_base_bdevs_discovered": 4, 00:11:10.463 "num_base_bdevs_operational": 4, 00:11:10.463 "base_bdevs_list": [ 
00:11:10.463 { 00:11:10.463 "name": "BaseBdev1", 00:11:10.463 "uuid": "f7f56d61-736f-588a-a364-368a9355e712", 00:11:10.463 "is_configured": true, 00:11:10.463 "data_offset": 2048, 00:11:10.463 "data_size": 63488 00:11:10.463 }, 00:11:10.463 { 00:11:10.463 "name": "BaseBdev2", 00:11:10.463 "uuid": "7fe42779-201a-5378-83cc-c0f808f38dbc", 00:11:10.463 "is_configured": true, 00:11:10.463 "data_offset": 2048, 00:11:10.463 "data_size": 63488 00:11:10.463 }, 00:11:10.463 { 00:11:10.463 "name": "BaseBdev3", 00:11:10.463 "uuid": "dddb7276-f943-56b2-be74-606d229fbd21", 00:11:10.463 "is_configured": true, 00:11:10.463 "data_offset": 2048, 00:11:10.463 "data_size": 63488 00:11:10.463 }, 00:11:10.463 { 00:11:10.463 "name": "BaseBdev4", 00:11:10.463 "uuid": "b9954e41-fd15-59ae-a87f-d4a2dcc1beb6", 00:11:10.463 "is_configured": true, 00:11:10.463 "data_offset": 2048, 00:11:10.463 "data_size": 63488 00:11:10.463 } 00:11:10.463 ] 00:11:10.463 }' 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.463 19:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.032 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:11.032 19:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:11.032 [2024-12-12 19:39:53.691933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.971 19:39:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.971 19:39:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.971 "name": "raid_bdev1", 00:11:11.971 "uuid": "35e83a58-774c-4606-8a0f-d7f85d6ad590", 00:11:11.971 "strip_size_kb": 64, 00:11:11.971 "state": "online", 00:11:11.971 "raid_level": "concat", 00:11:11.971 "superblock": true, 00:11:11.971 "num_base_bdevs": 4, 00:11:11.971 "num_base_bdevs_discovered": 4, 00:11:11.971 "num_base_bdevs_operational": 4, 00:11:11.971 "base_bdevs_list": [ 00:11:11.971 { 00:11:11.971 "name": "BaseBdev1", 00:11:11.971 "uuid": "f7f56d61-736f-588a-a364-368a9355e712", 00:11:11.971 "is_configured": true, 00:11:11.971 "data_offset": 2048, 00:11:11.971 "data_size": 63488 00:11:11.971 }, 00:11:11.971 { 00:11:11.971 "name": "BaseBdev2", 00:11:11.971 "uuid": "7fe42779-201a-5378-83cc-c0f808f38dbc", 00:11:11.971 "is_configured": true, 00:11:11.971 "data_offset": 2048, 00:11:11.971 "data_size": 63488 00:11:11.971 }, 00:11:11.971 { 00:11:11.971 "name": "BaseBdev3", 00:11:11.971 "uuid": "dddb7276-f943-56b2-be74-606d229fbd21", 00:11:11.971 "is_configured": true, 00:11:11.971 "data_offset": 2048, 00:11:11.971 "data_size": 63488 00:11:11.971 }, 00:11:11.971 { 00:11:11.971 "name": "BaseBdev4", 00:11:11.971 "uuid": "b9954e41-fd15-59ae-a87f-d4a2dcc1beb6", 00:11:11.971 "is_configured": true, 00:11:11.971 "data_offset": 2048, 00:11:11.971 "data_size": 63488 00:11:11.971 } 00:11:11.971 ] 00:11:11.971 }' 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.971 19:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.231 [2024-12-12 19:39:55.040991] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.231 [2024-12-12 19:39:55.041104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.231 [2024-12-12 19:39:55.043944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.231 [2024-12-12 19:39:55.044057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.231 [2024-12-12 19:39:55.044127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.231 [2024-12-12 19:39:55.044204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:12.231 { 00:11:12.231 "results": [ 00:11:12.231 { 00:11:12.231 "job": "raid_bdev1", 00:11:12.231 "core_mask": "0x1", 00:11:12.231 "workload": "randrw", 00:11:12.231 "percentage": 50, 00:11:12.231 "status": "finished", 00:11:12.231 "queue_depth": 1, 00:11:12.231 "io_size": 131072, 00:11:12.231 "runtime": 1.349539, 00:11:12.231 "iops": 13227.4799023963, 00:11:12.231 "mibps": 1653.4349877995376, 00:11:12.231 "io_failed": 1, 00:11:12.231 "io_timeout": 0, 00:11:12.231 "avg_latency_us": 106.35531781450003, 00:11:12.231 "min_latency_us": 27.165065502183406, 00:11:12.231 "max_latency_us": 1359.3711790393013 00:11:12.231 } 00:11:12.231 ], 00:11:12.231 "core_count": 1 00:11:12.231 } 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74587 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74587 ']' 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74587 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.231 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74587 00:11:12.489 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.489 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.489 killing process with pid 74587 00:11:12.489 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74587' 00:11:12.489 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74587 00:11:12.489 [2024-12-12 19:39:55.091860] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.489 19:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74587 00:11:12.747 [2024-12-12 19:39:55.448771] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XNPzNQPRXD 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:14.127 00:11:14.127 real 0m4.852s 00:11:14.127 user 0m5.546s 00:11:14.127 sys 0m0.685s 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:14.127 19:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.127 ************************************ 00:11:14.127 END TEST raid_read_error_test 00:11:14.127 ************************************ 00:11:14.127 19:39:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:14.127 19:39:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:14.127 19:39:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.127 19:39:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.127 ************************************ 00:11:14.127 START TEST raid_write_error_test 00:11:14.127 ************************************ 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2fe7HSxM9y 00:11:14.127 19:39:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74733 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74733 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74733 ']' 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.127 19:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.127 [2024-12-12 19:39:56.941960] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:14.127 [2024-12-12 19:39:56.942496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74733 ] 00:11:14.387 [2024-12-12 19:39:57.114444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.646 [2024-12-12 19:39:57.255691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.906 [2024-12-12 19:39:57.497018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.906 [2024-12-12 19:39:57.497136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.165 BaseBdev1_malloc 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.165 true 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.165 [2024-12-12 19:39:57.831758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:15.165 [2024-12-12 19:39:57.831830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.165 [2024-12-12 19:39:57.831851] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:15.165 [2024-12-12 19:39:57.831863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.165 [2024-12-12 19:39:57.834281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.165 BaseBdev1 00:11:15.165 [2024-12-12 19:39:57.834412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.165 BaseBdev2_malloc 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:15.165 19:39:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.165 true 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.165 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.165 [2024-12-12 19:39:57.895929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:15.165 [2024-12-12 19:39:57.896077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.165 [2024-12-12 19:39:57.896111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:15.166 [2024-12-12 19:39:57.896125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.166 [2024-12-12 19:39:57.898599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.166 [2024-12-12 19:39:57.898640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:15.166 BaseBdev2 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:15.166 BaseBdev3_malloc 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.166 true 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.166 [2024-12-12 19:39:57.969496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:15.166 [2024-12-12 19:39:57.969636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.166 [2024-12-12 19:39:57.969680] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:15.166 [2024-12-12 19:39:57.969692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.166 [2024-12-12 19:39:57.972117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.166 [2024-12-12 19:39:57.972160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:15.166 BaseBdev3 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.166 19:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.425 BaseBdev4_malloc 00:11:15.425 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.425 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.426 true 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.426 [2024-12-12 19:39:58.041907] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:15.426 [2024-12-12 19:39:58.042049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.426 [2024-12-12 19:39:58.042086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:15.426 [2024-12-12 19:39:58.042120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.426 [2024-12-12 19:39:58.044630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.426 [2024-12-12 19:39:58.044720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:15.426 BaseBdev4 
00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.426 [2024-12-12 19:39:58.053959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.426 [2024-12-12 19:39:58.056062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.426 [2024-12-12 19:39:58.056179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.426 [2024-12-12 19:39:58.056278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.426 [2024-12-12 19:39:58.056571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:15.426 [2024-12-12 19:39:58.056622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:15.426 [2024-12-12 19:39:58.056923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:15.426 [2024-12-12 19:39:58.057141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:15.426 [2024-12-12 19:39:58.057183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:15.426 [2024-12-12 19:39:58.057406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.426 "name": "raid_bdev1", 00:11:15.426 "uuid": "41e1cc54-ec67-4b83-a155-967377d1e24a", 00:11:15.426 "strip_size_kb": 64, 00:11:15.426 "state": "online", 00:11:15.426 "raid_level": "concat", 00:11:15.426 "superblock": true, 00:11:15.426 "num_base_bdevs": 4, 00:11:15.426 "num_base_bdevs_discovered": 4, 00:11:15.426 
"num_base_bdevs_operational": 4, 00:11:15.426 "base_bdevs_list": [ 00:11:15.426 { 00:11:15.426 "name": "BaseBdev1", 00:11:15.426 "uuid": "644e2d8c-5b63-521a-b2b6-53cc42a2516c", 00:11:15.426 "is_configured": true, 00:11:15.426 "data_offset": 2048, 00:11:15.426 "data_size": 63488 00:11:15.426 }, 00:11:15.426 { 00:11:15.426 "name": "BaseBdev2", 00:11:15.426 "uuid": "9854e6de-bff8-5dbf-b826-b408671c5c83", 00:11:15.426 "is_configured": true, 00:11:15.426 "data_offset": 2048, 00:11:15.426 "data_size": 63488 00:11:15.426 }, 00:11:15.426 { 00:11:15.426 "name": "BaseBdev3", 00:11:15.426 "uuid": "366947b0-5805-5a2b-8b52-6bfc56375a61", 00:11:15.426 "is_configured": true, 00:11:15.426 "data_offset": 2048, 00:11:15.426 "data_size": 63488 00:11:15.426 }, 00:11:15.426 { 00:11:15.426 "name": "BaseBdev4", 00:11:15.426 "uuid": "d2e2210c-c6fa-5d60-812e-4af56e47c8f1", 00:11:15.426 "is_configured": true, 00:11:15.426 "data_offset": 2048, 00:11:15.426 "data_size": 63488 00:11:15.426 } 00:11:15.426 ] 00:11:15.426 }' 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.426 19:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.685 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:15.685 19:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:15.944 [2024-12-12 19:39:58.538718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.883 19:39:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.883 "name": "raid_bdev1", 00:11:16.883 "uuid": "41e1cc54-ec67-4b83-a155-967377d1e24a", 00:11:16.883 "strip_size_kb": 64, 00:11:16.883 "state": "online", 00:11:16.883 "raid_level": "concat", 00:11:16.883 "superblock": true, 00:11:16.883 "num_base_bdevs": 4, 00:11:16.883 "num_base_bdevs_discovered": 4, 00:11:16.883 "num_base_bdevs_operational": 4, 00:11:16.883 "base_bdevs_list": [ 00:11:16.883 { 00:11:16.883 "name": "BaseBdev1", 00:11:16.883 "uuid": "644e2d8c-5b63-521a-b2b6-53cc42a2516c", 00:11:16.883 "is_configured": true, 00:11:16.883 "data_offset": 2048, 00:11:16.883 "data_size": 63488 00:11:16.883 }, 00:11:16.883 { 00:11:16.883 "name": "BaseBdev2", 00:11:16.883 "uuid": "9854e6de-bff8-5dbf-b826-b408671c5c83", 00:11:16.883 "is_configured": true, 00:11:16.883 "data_offset": 2048, 00:11:16.883 "data_size": 63488 00:11:16.883 }, 00:11:16.883 { 00:11:16.883 "name": "BaseBdev3", 00:11:16.883 "uuid": "366947b0-5805-5a2b-8b52-6bfc56375a61", 00:11:16.883 "is_configured": true, 00:11:16.883 "data_offset": 2048, 00:11:16.883 "data_size": 63488 00:11:16.883 }, 00:11:16.883 { 00:11:16.883 "name": "BaseBdev4", 00:11:16.883 "uuid": "d2e2210c-c6fa-5d60-812e-4af56e47c8f1", 00:11:16.883 "is_configured": true, 00:11:16.883 "data_offset": 2048, 00:11:16.883 "data_size": 63488 00:11:16.883 } 00:11:16.883 ] 00:11:16.883 }' 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.883 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.143 [2024-12-12 19:39:59.891473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.143 [2024-12-12 19:39:59.891526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.143 [2024-12-12 19:39:59.894278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.143 { 00:11:17.143 "results": [ 00:11:17.143 { 00:11:17.143 "job": "raid_bdev1", 00:11:17.143 "core_mask": "0x1", 00:11:17.143 "workload": "randrw", 00:11:17.143 "percentage": 50, 00:11:17.143 "status": "finished", 00:11:17.143 "queue_depth": 1, 00:11:17.143 "io_size": 131072, 00:11:17.143 "runtime": 1.353308, 00:11:17.143 "iops": 13113.053347796658, 00:11:17.143 "mibps": 1639.1316684745823, 00:11:17.143 "io_failed": 1, 00:11:17.143 "io_timeout": 0, 00:11:17.143 "avg_latency_us": 107.0963020012239, 00:11:17.143 "min_latency_us": 27.388646288209607, 00:11:17.143 "max_latency_us": 1545.3903930131005 00:11:17.143 } 00:11:17.143 ], 00:11:17.143 "core_count": 1 00:11:17.143 } 00:11:17.143 [2024-12-12 19:39:59.894430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.143 [2024-12-12 19:39:59.894485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.143 [2024-12-12 19:39:59.894499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74733 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74733 ']' 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74733 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74733 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74733' 00:11:17.143 killing process with pid 74733 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74733 00:11:17.143 [2024-12-12 19:39:59.941801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.143 19:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74733 00:11:17.712 [2024-12-12 19:40:00.297751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2fe7HSxM9y 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:19.112 00:11:19.112 real 0m4.777s 00:11:19.112 user 0m5.412s 
00:11:19.112 sys 0m0.692s 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.112 ************************************ 00:11:19.112 END TEST raid_write_error_test 00:11:19.112 ************************************ 00:11:19.112 19:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.112 19:40:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:19.112 19:40:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:19.112 19:40:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:19.112 19:40:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.112 19:40:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.112 ************************************ 00:11:19.112 START TEST raid_state_function_test 00:11:19.113 ************************************ 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.113 
19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:19.113 19:40:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:19.113 Process raid pid: 74881 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74881 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74881' 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74881 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74881 ']' 00:11:19.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.113 19:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.113 [2024-12-12 19:40:01.791329] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:19.113 [2024-12-12 19:40:01.791465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.394 [2024-12-12 19:40:01.971145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.394 [2024-12-12 19:40:02.112283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.654 [2024-12-12 19:40:02.359396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.654 [2024-12-12 19:40:02.359450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.914 19:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.914 19:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.915 [2024-12-12 19:40:02.603945] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:19.915 [2024-12-12 19:40:02.604094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.915 [2024-12-12 19:40:02.604126] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.915 [2024-12-12 19:40:02.604152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.915 [2024-12-12 19:40:02.604171] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:19.915 [2024-12-12 19:40:02.604193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.915 [2024-12-12 19:40:02.604210] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:19.915 [2024-12-12 19:40:02.604254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.915 "name": "Existed_Raid", 00:11:19.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.915 "strip_size_kb": 0, 00:11:19.915 "state": "configuring", 00:11:19.915 "raid_level": "raid1", 00:11:19.915 "superblock": false, 00:11:19.915 "num_base_bdevs": 4, 00:11:19.915 "num_base_bdevs_discovered": 0, 00:11:19.915 "num_base_bdevs_operational": 4, 00:11:19.915 "base_bdevs_list": [ 00:11:19.915 { 00:11:19.915 "name": "BaseBdev1", 00:11:19.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.915 "is_configured": false, 00:11:19.915 "data_offset": 0, 00:11:19.915 "data_size": 0 00:11:19.915 }, 00:11:19.915 { 00:11:19.915 "name": "BaseBdev2", 00:11:19.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.915 "is_configured": false, 00:11:19.915 "data_offset": 0, 00:11:19.915 "data_size": 0 00:11:19.915 }, 00:11:19.915 { 00:11:19.915 "name": "BaseBdev3", 00:11:19.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.915 "is_configured": false, 00:11:19.915 "data_offset": 0, 00:11:19.915 "data_size": 0 00:11:19.915 }, 00:11:19.915 { 00:11:19.915 "name": "BaseBdev4", 00:11:19.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.915 "is_configured": false, 00:11:19.915 "data_offset": 0, 00:11:19.915 "data_size": 0 00:11:19.915 } 00:11:19.915 ] 00:11:19.915 }' 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.915 19:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.484 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:20.484 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.484 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.484 [2024-12-12 19:40:03.059134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.484 [2024-12-12 19:40:03.059263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:20.484 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.485 [2024-12-12 19:40:03.071066] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.485 [2024-12-12 19:40:03.071154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.485 [2024-12-12 19:40:03.071181] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.485 [2024-12-12 19:40:03.071204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.485 [2024-12-12 19:40:03.071221] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:20.485 [2024-12-12 19:40:03.071242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.485 [2024-12-12 19:40:03.071258] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:20.485 [2024-12-12 19:40:03.071279] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.485 [2024-12-12 19:40:03.125859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.485 BaseBdev1 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.485 [ 00:11:20.485 { 00:11:20.485 "name": "BaseBdev1", 00:11:20.485 "aliases": [ 00:11:20.485 "7b7b1c55-ddda-4d69-800a-05d6dcb59e1c" 00:11:20.485 ], 00:11:20.485 "product_name": "Malloc disk", 00:11:20.485 "block_size": 512, 00:11:20.485 "num_blocks": 65536, 00:11:20.485 "uuid": "7b7b1c55-ddda-4d69-800a-05d6dcb59e1c", 00:11:20.485 "assigned_rate_limits": { 00:11:20.485 "rw_ios_per_sec": 0, 00:11:20.485 "rw_mbytes_per_sec": 0, 00:11:20.485 "r_mbytes_per_sec": 0, 00:11:20.485 "w_mbytes_per_sec": 0 00:11:20.485 }, 00:11:20.485 "claimed": true, 00:11:20.485 "claim_type": "exclusive_write", 00:11:20.485 "zoned": false, 00:11:20.485 "supported_io_types": { 00:11:20.485 "read": true, 00:11:20.485 "write": true, 00:11:20.485 "unmap": true, 00:11:20.485 "flush": true, 00:11:20.485 "reset": true, 00:11:20.485 "nvme_admin": false, 00:11:20.485 "nvme_io": false, 00:11:20.485 "nvme_io_md": false, 00:11:20.485 "write_zeroes": true, 00:11:20.485 "zcopy": true, 00:11:20.485 "get_zone_info": false, 00:11:20.485 "zone_management": false, 00:11:20.485 "zone_append": false, 00:11:20.485 "compare": false, 00:11:20.485 "compare_and_write": false, 00:11:20.485 "abort": true, 00:11:20.485 "seek_hole": false, 00:11:20.485 "seek_data": false, 00:11:20.485 "copy": true, 00:11:20.485 "nvme_iov_md": false 00:11:20.485 }, 00:11:20.485 "memory_domains": [ 00:11:20.485 { 00:11:20.485 "dma_device_id": "system", 00:11:20.485 "dma_device_type": 1 00:11:20.485 }, 00:11:20.485 { 00:11:20.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.485 "dma_device_type": 2 00:11:20.485 } 00:11:20.485 ], 00:11:20.485 "driver_specific": {} 00:11:20.485 } 00:11:20.485 ] 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.485 "name": "Existed_Raid", 
00:11:20.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.485 "strip_size_kb": 0, 00:11:20.485 "state": "configuring", 00:11:20.485 "raid_level": "raid1", 00:11:20.485 "superblock": false, 00:11:20.485 "num_base_bdevs": 4, 00:11:20.485 "num_base_bdevs_discovered": 1, 00:11:20.485 "num_base_bdevs_operational": 4, 00:11:20.485 "base_bdevs_list": [ 00:11:20.485 { 00:11:20.485 "name": "BaseBdev1", 00:11:20.485 "uuid": "7b7b1c55-ddda-4d69-800a-05d6dcb59e1c", 00:11:20.485 "is_configured": true, 00:11:20.485 "data_offset": 0, 00:11:20.485 "data_size": 65536 00:11:20.485 }, 00:11:20.485 { 00:11:20.485 "name": "BaseBdev2", 00:11:20.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.485 "is_configured": false, 00:11:20.485 "data_offset": 0, 00:11:20.485 "data_size": 0 00:11:20.485 }, 00:11:20.485 { 00:11:20.485 "name": "BaseBdev3", 00:11:20.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.485 "is_configured": false, 00:11:20.485 "data_offset": 0, 00:11:20.485 "data_size": 0 00:11:20.485 }, 00:11:20.485 { 00:11:20.485 "name": "BaseBdev4", 00:11:20.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.485 "is_configured": false, 00:11:20.485 "data_offset": 0, 00:11:20.485 "data_size": 0 00:11:20.485 } 00:11:20.485 ] 00:11:20.485 }' 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.485 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.744 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.744 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.744 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.744 [2024-12-12 19:40:03.577287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.744 [2024-12-12 19:40:03.577434] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:20.744 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.744 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.744 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.744 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.004 [2024-12-12 19:40:03.589305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.004 [2024-12-12 19:40:03.591534] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.004 [2024-12-12 19:40:03.591636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.004 [2024-12-12 19:40:03.591666] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.004 [2024-12-12 19:40:03.591690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.004 [2024-12-12 19:40:03.591707] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.004 [2024-12-12 19:40:03.591728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.004 
19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.004 "name": "Existed_Raid", 00:11:21.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.004 "strip_size_kb": 0, 00:11:21.004 "state": "configuring", 00:11:21.004 "raid_level": "raid1", 00:11:21.004 "superblock": false, 00:11:21.004 "num_base_bdevs": 4, 00:11:21.004 "num_base_bdevs_discovered": 1, 
00:11:21.004 "num_base_bdevs_operational": 4, 00:11:21.004 "base_bdevs_list": [ 00:11:21.004 { 00:11:21.004 "name": "BaseBdev1", 00:11:21.004 "uuid": "7b7b1c55-ddda-4d69-800a-05d6dcb59e1c", 00:11:21.004 "is_configured": true, 00:11:21.004 "data_offset": 0, 00:11:21.004 "data_size": 65536 00:11:21.004 }, 00:11:21.004 { 00:11:21.004 "name": "BaseBdev2", 00:11:21.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.004 "is_configured": false, 00:11:21.004 "data_offset": 0, 00:11:21.004 "data_size": 0 00:11:21.004 }, 00:11:21.004 { 00:11:21.004 "name": "BaseBdev3", 00:11:21.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.004 "is_configured": false, 00:11:21.004 "data_offset": 0, 00:11:21.004 "data_size": 0 00:11:21.004 }, 00:11:21.004 { 00:11:21.004 "name": "BaseBdev4", 00:11:21.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.004 "is_configured": false, 00:11:21.004 "data_offset": 0, 00:11:21.004 "data_size": 0 00:11:21.004 } 00:11:21.004 ] 00:11:21.004 }' 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.004 19:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:21.264 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.264 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.523 [2024-12-12 19:40:04.116853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.523 BaseBdev2 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.523 [ 00:11:21.523 { 00:11:21.523 "name": "BaseBdev2", 00:11:21.523 "aliases": [ 00:11:21.523 "b69b3926-865b-4a63-bac8-9b2ab5f2a652" 00:11:21.523 ], 00:11:21.523 "product_name": "Malloc disk", 00:11:21.523 "block_size": 512, 00:11:21.523 "num_blocks": 65536, 00:11:21.523 "uuid": "b69b3926-865b-4a63-bac8-9b2ab5f2a652", 00:11:21.523 "assigned_rate_limits": { 00:11:21.523 "rw_ios_per_sec": 0, 00:11:21.523 "rw_mbytes_per_sec": 0, 00:11:21.523 "r_mbytes_per_sec": 0, 00:11:21.523 "w_mbytes_per_sec": 0 00:11:21.523 }, 00:11:21.523 "claimed": true, 00:11:21.523 "claim_type": "exclusive_write", 00:11:21.523 "zoned": false, 00:11:21.523 "supported_io_types": { 00:11:21.523 "read": true, 
00:11:21.523 "write": true, 00:11:21.523 "unmap": true, 00:11:21.523 "flush": true, 00:11:21.523 "reset": true, 00:11:21.523 "nvme_admin": false, 00:11:21.523 "nvme_io": false, 00:11:21.523 "nvme_io_md": false, 00:11:21.523 "write_zeroes": true, 00:11:21.523 "zcopy": true, 00:11:21.523 "get_zone_info": false, 00:11:21.523 "zone_management": false, 00:11:21.523 "zone_append": false, 00:11:21.523 "compare": false, 00:11:21.523 "compare_and_write": false, 00:11:21.523 "abort": true, 00:11:21.523 "seek_hole": false, 00:11:21.523 "seek_data": false, 00:11:21.523 "copy": true, 00:11:21.523 "nvme_iov_md": false 00:11:21.523 }, 00:11:21.523 "memory_domains": [ 00:11:21.523 { 00:11:21.523 "dma_device_id": "system", 00:11:21.523 "dma_device_type": 1 00:11:21.523 }, 00:11:21.523 { 00:11:21.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.523 "dma_device_type": 2 00:11:21.523 } 00:11:21.523 ], 00:11:21.523 "driver_specific": {} 00:11:21.523 } 00:11:21.523 ] 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.523 "name": "Existed_Raid", 00:11:21.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.523 "strip_size_kb": 0, 00:11:21.523 "state": "configuring", 00:11:21.523 "raid_level": "raid1", 00:11:21.523 "superblock": false, 00:11:21.523 "num_base_bdevs": 4, 00:11:21.523 "num_base_bdevs_discovered": 2, 00:11:21.523 "num_base_bdevs_operational": 4, 00:11:21.523 "base_bdevs_list": [ 00:11:21.523 { 00:11:21.523 "name": "BaseBdev1", 00:11:21.523 "uuid": "7b7b1c55-ddda-4d69-800a-05d6dcb59e1c", 00:11:21.523 "is_configured": true, 00:11:21.523 "data_offset": 0, 00:11:21.523 "data_size": 65536 00:11:21.523 }, 00:11:21.523 { 00:11:21.523 "name": "BaseBdev2", 00:11:21.523 "uuid": "b69b3926-865b-4a63-bac8-9b2ab5f2a652", 00:11:21.523 "is_configured": true, 
00:11:21.523 "data_offset": 0, 00:11:21.523 "data_size": 65536 00:11:21.523 }, 00:11:21.523 { 00:11:21.523 "name": "BaseBdev3", 00:11:21.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.523 "is_configured": false, 00:11:21.523 "data_offset": 0, 00:11:21.523 "data_size": 0 00:11:21.523 }, 00:11:21.523 { 00:11:21.523 "name": "BaseBdev4", 00:11:21.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.523 "is_configured": false, 00:11:21.523 "data_offset": 0, 00:11:21.523 "data_size": 0 00:11:21.523 } 00:11:21.523 ] 00:11:21.523 }' 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.523 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.782 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:21.782 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.782 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.782 [2024-12-12 19:40:04.618291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.783 BaseBdev3 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.783 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.042 [ 00:11:22.042 { 00:11:22.042 "name": "BaseBdev3", 00:11:22.042 "aliases": [ 00:11:22.042 "b2cd839d-6e5d-40b1-af54-9032919070c6" 00:11:22.042 ], 00:11:22.042 "product_name": "Malloc disk", 00:11:22.042 "block_size": 512, 00:11:22.042 "num_blocks": 65536, 00:11:22.042 "uuid": "b2cd839d-6e5d-40b1-af54-9032919070c6", 00:11:22.042 "assigned_rate_limits": { 00:11:22.042 "rw_ios_per_sec": 0, 00:11:22.042 "rw_mbytes_per_sec": 0, 00:11:22.042 "r_mbytes_per_sec": 0, 00:11:22.042 "w_mbytes_per_sec": 0 00:11:22.042 }, 00:11:22.042 "claimed": true, 00:11:22.042 "claim_type": "exclusive_write", 00:11:22.042 "zoned": false, 00:11:22.042 "supported_io_types": { 00:11:22.042 "read": true, 00:11:22.042 "write": true, 00:11:22.042 "unmap": true, 00:11:22.042 "flush": true, 00:11:22.042 "reset": true, 00:11:22.042 "nvme_admin": false, 00:11:22.042 "nvme_io": false, 00:11:22.042 "nvme_io_md": false, 00:11:22.042 "write_zeroes": true, 00:11:22.042 "zcopy": true, 00:11:22.042 "get_zone_info": false, 00:11:22.042 "zone_management": false, 00:11:22.042 "zone_append": false, 00:11:22.042 "compare": false, 00:11:22.042 "compare_and_write": false, 
00:11:22.042 "abort": true, 00:11:22.042 "seek_hole": false, 00:11:22.042 "seek_data": false, 00:11:22.042 "copy": true, 00:11:22.042 "nvme_iov_md": false 00:11:22.042 }, 00:11:22.042 "memory_domains": [ 00:11:22.042 { 00:11:22.042 "dma_device_id": "system", 00:11:22.042 "dma_device_type": 1 00:11:22.042 }, 00:11:22.042 { 00:11:22.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.042 "dma_device_type": 2 00:11:22.042 } 00:11:22.042 ], 00:11:22.042 "driver_specific": {} 00:11:22.042 } 00:11:22.042 ] 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.042 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.043 "name": "Existed_Raid", 00:11:22.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.043 "strip_size_kb": 0, 00:11:22.043 "state": "configuring", 00:11:22.043 "raid_level": "raid1", 00:11:22.043 "superblock": false, 00:11:22.043 "num_base_bdevs": 4, 00:11:22.043 "num_base_bdevs_discovered": 3, 00:11:22.043 "num_base_bdevs_operational": 4, 00:11:22.043 "base_bdevs_list": [ 00:11:22.043 { 00:11:22.043 "name": "BaseBdev1", 00:11:22.043 "uuid": "7b7b1c55-ddda-4d69-800a-05d6dcb59e1c", 00:11:22.043 "is_configured": true, 00:11:22.043 "data_offset": 0, 00:11:22.043 "data_size": 65536 00:11:22.043 }, 00:11:22.043 { 00:11:22.043 "name": "BaseBdev2", 00:11:22.043 "uuid": "b69b3926-865b-4a63-bac8-9b2ab5f2a652", 00:11:22.043 "is_configured": true, 00:11:22.043 "data_offset": 0, 00:11:22.043 "data_size": 65536 00:11:22.043 }, 00:11:22.043 { 00:11:22.043 "name": "BaseBdev3", 00:11:22.043 "uuid": "b2cd839d-6e5d-40b1-af54-9032919070c6", 00:11:22.043 "is_configured": true, 00:11:22.043 "data_offset": 0, 00:11:22.043 "data_size": 65536 00:11:22.043 }, 00:11:22.043 { 00:11:22.043 "name": "BaseBdev4", 00:11:22.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.043 "is_configured": false, 
00:11:22.043 "data_offset": 0, 00:11:22.043 "data_size": 0 00:11:22.043 } 00:11:22.043 ] 00:11:22.043 }' 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.043 19:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.302 [2024-12-12 19:40:05.137391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:22.302 [2024-12-12 19:40:05.137567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:22.302 [2024-12-12 19:40:05.137611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:22.302 [2024-12-12 19:40:05.138010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:22.302 [2024-12-12 19:40:05.138290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:22.302 [2024-12-12 19:40:05.138337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:22.302 [2024-12-12 19:40:05.138761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.302 BaseBdev4 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.302 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.562 [ 00:11:22.562 { 00:11:22.562 "name": "BaseBdev4", 00:11:22.562 "aliases": [ 00:11:22.562 "47357e1c-e9bb-4709-b788-576065a2b78a" 00:11:22.562 ], 00:11:22.562 "product_name": "Malloc disk", 00:11:22.562 "block_size": 512, 00:11:22.562 "num_blocks": 65536, 00:11:22.562 "uuid": "47357e1c-e9bb-4709-b788-576065a2b78a", 00:11:22.562 "assigned_rate_limits": { 00:11:22.562 "rw_ios_per_sec": 0, 00:11:22.562 "rw_mbytes_per_sec": 0, 00:11:22.562 "r_mbytes_per_sec": 0, 00:11:22.562 "w_mbytes_per_sec": 0 00:11:22.562 }, 00:11:22.562 "claimed": true, 00:11:22.562 "claim_type": "exclusive_write", 00:11:22.562 "zoned": false, 00:11:22.562 "supported_io_types": { 00:11:22.562 "read": true, 00:11:22.562 "write": true, 00:11:22.562 "unmap": true, 00:11:22.562 "flush": true, 00:11:22.562 "reset": true, 00:11:22.562 
"nvme_admin": false, 00:11:22.562 "nvme_io": false, 00:11:22.562 "nvme_io_md": false, 00:11:22.562 "write_zeroes": true, 00:11:22.562 "zcopy": true, 00:11:22.562 "get_zone_info": false, 00:11:22.562 "zone_management": false, 00:11:22.562 "zone_append": false, 00:11:22.562 "compare": false, 00:11:22.562 "compare_and_write": false, 00:11:22.562 "abort": true, 00:11:22.562 "seek_hole": false, 00:11:22.562 "seek_data": false, 00:11:22.562 "copy": true, 00:11:22.562 "nvme_iov_md": false 00:11:22.562 }, 00:11:22.562 "memory_domains": [ 00:11:22.562 { 00:11:22.562 "dma_device_id": "system", 00:11:22.562 "dma_device_type": 1 00:11:22.562 }, 00:11:22.562 { 00:11:22.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.562 "dma_device_type": 2 00:11:22.562 } 00:11:22.562 ], 00:11:22.562 "driver_specific": {} 00:11:22.562 } 00:11:22.562 ] 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.562 19:40:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.562 "name": "Existed_Raid", 00:11:22.562 "uuid": "d90a7ce5-9a33-4c2e-a8ee-30c0660d25d4", 00:11:22.562 "strip_size_kb": 0, 00:11:22.562 "state": "online", 00:11:22.562 "raid_level": "raid1", 00:11:22.562 "superblock": false, 00:11:22.562 "num_base_bdevs": 4, 00:11:22.562 "num_base_bdevs_discovered": 4, 00:11:22.562 "num_base_bdevs_operational": 4, 00:11:22.562 "base_bdevs_list": [ 00:11:22.562 { 00:11:22.562 "name": "BaseBdev1", 00:11:22.562 "uuid": "7b7b1c55-ddda-4d69-800a-05d6dcb59e1c", 00:11:22.562 "is_configured": true, 00:11:22.562 "data_offset": 0, 00:11:22.562 "data_size": 65536 00:11:22.562 }, 00:11:22.562 { 00:11:22.562 "name": "BaseBdev2", 00:11:22.562 "uuid": "b69b3926-865b-4a63-bac8-9b2ab5f2a652", 00:11:22.562 "is_configured": true, 00:11:22.562 "data_offset": 0, 00:11:22.562 "data_size": 65536 00:11:22.562 }, 00:11:22.562 { 00:11:22.562 "name": "BaseBdev3", 00:11:22.562 "uuid": 
"b2cd839d-6e5d-40b1-af54-9032919070c6", 00:11:22.562 "is_configured": true, 00:11:22.562 "data_offset": 0, 00:11:22.562 "data_size": 65536 00:11:22.562 }, 00:11:22.562 { 00:11:22.562 "name": "BaseBdev4", 00:11:22.562 "uuid": "47357e1c-e9bb-4709-b788-576065a2b78a", 00:11:22.562 "is_configured": true, 00:11:22.562 "data_offset": 0, 00:11:22.562 "data_size": 65536 00:11:22.562 } 00:11:22.562 ] 00:11:22.562 }' 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.562 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.821 [2024-12-12 19:40:05.637077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.821 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.080 19:40:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.080 "name": "Existed_Raid", 00:11:23.080 "aliases": [ 00:11:23.080 "d90a7ce5-9a33-4c2e-a8ee-30c0660d25d4" 00:11:23.080 ], 00:11:23.080 "product_name": "Raid Volume", 00:11:23.080 "block_size": 512, 00:11:23.080 "num_blocks": 65536, 00:11:23.080 "uuid": "d90a7ce5-9a33-4c2e-a8ee-30c0660d25d4", 00:11:23.080 "assigned_rate_limits": { 00:11:23.080 "rw_ios_per_sec": 0, 00:11:23.080 "rw_mbytes_per_sec": 0, 00:11:23.080 "r_mbytes_per_sec": 0, 00:11:23.080 "w_mbytes_per_sec": 0 00:11:23.080 }, 00:11:23.080 "claimed": false, 00:11:23.080 "zoned": false, 00:11:23.080 "supported_io_types": { 00:11:23.080 "read": true, 00:11:23.080 "write": true, 00:11:23.080 "unmap": false, 00:11:23.080 "flush": false, 00:11:23.080 "reset": true, 00:11:23.080 "nvme_admin": false, 00:11:23.080 "nvme_io": false, 00:11:23.080 "nvme_io_md": false, 00:11:23.080 "write_zeroes": true, 00:11:23.080 "zcopy": false, 00:11:23.080 "get_zone_info": false, 00:11:23.080 "zone_management": false, 00:11:23.080 "zone_append": false, 00:11:23.080 "compare": false, 00:11:23.080 "compare_and_write": false, 00:11:23.080 "abort": false, 00:11:23.080 "seek_hole": false, 00:11:23.080 "seek_data": false, 00:11:23.080 "copy": false, 00:11:23.080 "nvme_iov_md": false 00:11:23.080 }, 00:11:23.080 "memory_domains": [ 00:11:23.080 { 00:11:23.080 "dma_device_id": "system", 00:11:23.080 "dma_device_type": 1 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.080 "dma_device_type": 2 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "dma_device_id": "system", 00:11:23.080 "dma_device_type": 1 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.080 "dma_device_type": 2 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "dma_device_id": "system", 00:11:23.080 "dma_device_type": 1 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:23.080 "dma_device_type": 2 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "dma_device_id": "system", 00:11:23.080 "dma_device_type": 1 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.080 "dma_device_type": 2 00:11:23.080 } 00:11:23.080 ], 00:11:23.080 "driver_specific": { 00:11:23.080 "raid": { 00:11:23.080 "uuid": "d90a7ce5-9a33-4c2e-a8ee-30c0660d25d4", 00:11:23.080 "strip_size_kb": 0, 00:11:23.080 "state": "online", 00:11:23.080 "raid_level": "raid1", 00:11:23.080 "superblock": false, 00:11:23.080 "num_base_bdevs": 4, 00:11:23.080 "num_base_bdevs_discovered": 4, 00:11:23.080 "num_base_bdevs_operational": 4, 00:11:23.080 "base_bdevs_list": [ 00:11:23.080 { 00:11:23.080 "name": "BaseBdev1", 00:11:23.080 "uuid": "7b7b1c55-ddda-4d69-800a-05d6dcb59e1c", 00:11:23.080 "is_configured": true, 00:11:23.080 "data_offset": 0, 00:11:23.080 "data_size": 65536 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "name": "BaseBdev2", 00:11:23.080 "uuid": "b69b3926-865b-4a63-bac8-9b2ab5f2a652", 00:11:23.080 "is_configured": true, 00:11:23.080 "data_offset": 0, 00:11:23.080 "data_size": 65536 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "name": "BaseBdev3", 00:11:23.080 "uuid": "b2cd839d-6e5d-40b1-af54-9032919070c6", 00:11:23.080 "is_configured": true, 00:11:23.080 "data_offset": 0, 00:11:23.080 "data_size": 65536 00:11:23.080 }, 00:11:23.080 { 00:11:23.080 "name": "BaseBdev4", 00:11:23.080 "uuid": "47357e1c-e9bb-4709-b788-576065a2b78a", 00:11:23.080 "is_configured": true, 00:11:23.080 "data_offset": 0, 00:11:23.080 "data_size": 65536 00:11:23.080 } 00:11:23.080 ] 00:11:23.080 } 00:11:23.080 } 00:11:23.080 }' 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:23.080 BaseBdev2 00:11:23.080 BaseBdev3 
00:11:23.080 BaseBdev4' 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.080 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.081 19:40:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.081 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.340 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.340 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.340 19:40:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.340 19:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:23.340 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.340 19:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.340 [2024-12-12 19:40:05.960210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.340 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.341 
19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.341 "name": "Existed_Raid", 00:11:23.341 "uuid": "d90a7ce5-9a33-4c2e-a8ee-30c0660d25d4", 00:11:23.341 "strip_size_kb": 0, 00:11:23.341 "state": "online", 00:11:23.341 "raid_level": "raid1", 00:11:23.341 "superblock": false, 00:11:23.341 "num_base_bdevs": 4, 00:11:23.341 "num_base_bdevs_discovered": 3, 00:11:23.341 "num_base_bdevs_operational": 3, 00:11:23.341 "base_bdevs_list": [ 00:11:23.341 { 00:11:23.341 "name": null, 00:11:23.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.341 "is_configured": false, 00:11:23.341 "data_offset": 0, 00:11:23.341 "data_size": 65536 00:11:23.341 }, 00:11:23.341 { 00:11:23.341 "name": "BaseBdev2", 00:11:23.341 "uuid": "b69b3926-865b-4a63-bac8-9b2ab5f2a652", 00:11:23.341 "is_configured": true, 00:11:23.341 "data_offset": 0, 00:11:23.341 "data_size": 65536 00:11:23.341 }, 00:11:23.341 { 00:11:23.341 "name": "BaseBdev3", 00:11:23.341 "uuid": "b2cd839d-6e5d-40b1-af54-9032919070c6", 00:11:23.341 "is_configured": true, 00:11:23.341 "data_offset": 0, 
00:11:23.341 "data_size": 65536 00:11:23.341 }, 00:11:23.341 { 00:11:23.341 "name": "BaseBdev4", 00:11:23.341 "uuid": "47357e1c-e9bb-4709-b788-576065a2b78a", 00:11:23.341 "is_configured": true, 00:11:23.341 "data_offset": 0, 00:11:23.341 "data_size": 65536 00:11:23.341 } 00:11:23.341 ] 00:11:23.341 }' 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.341 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.909 [2024-12-12 19:40:06.596294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.909 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.169 [2024-12-12 19:40:06.754036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.169 19:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.169 [2024-12-12 19:40:06.912153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:24.169 [2024-12-12 19:40:06.912352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.429 [2024-12-12 19:40:07.015494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.429 [2024-12-12 19:40:07.015681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.429 [2024-12-12 19:40:07.015727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.429 BaseBdev2 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.429 [ 00:11:24.429 { 00:11:24.429 "name": "BaseBdev2", 00:11:24.429 "aliases": [ 00:11:24.429 "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad" 00:11:24.429 ], 00:11:24.429 "product_name": "Malloc disk", 00:11:24.429 "block_size": 512, 00:11:24.429 "num_blocks": 65536, 00:11:24.429 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:24.429 "assigned_rate_limits": { 00:11:24.429 "rw_ios_per_sec": 0, 00:11:24.429 "rw_mbytes_per_sec": 0, 00:11:24.429 "r_mbytes_per_sec": 0, 00:11:24.429 "w_mbytes_per_sec": 0 00:11:24.429 }, 00:11:24.429 "claimed": false, 00:11:24.429 "zoned": false, 00:11:24.429 "supported_io_types": { 00:11:24.429 "read": true, 00:11:24.429 "write": true, 00:11:24.429 "unmap": true, 00:11:24.429 "flush": true, 00:11:24.429 "reset": true, 00:11:24.429 "nvme_admin": false, 00:11:24.429 "nvme_io": false, 00:11:24.429 "nvme_io_md": false, 00:11:24.429 "write_zeroes": true, 00:11:24.429 "zcopy": true, 00:11:24.429 "get_zone_info": false, 00:11:24.429 "zone_management": false, 00:11:24.429 "zone_append": false, 
00:11:24.429 "compare": false, 00:11:24.429 "compare_and_write": false, 00:11:24.429 "abort": true, 00:11:24.429 "seek_hole": false, 00:11:24.429 "seek_data": false, 00:11:24.429 "copy": true, 00:11:24.429 "nvme_iov_md": false 00:11:24.429 }, 00:11:24.429 "memory_domains": [ 00:11:24.429 { 00:11:24.429 "dma_device_id": "system", 00:11:24.429 "dma_device_type": 1 00:11:24.429 }, 00:11:24.429 { 00:11:24.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.429 "dma_device_type": 2 00:11:24.429 } 00:11:24.429 ], 00:11:24.429 "driver_specific": {} 00:11:24.429 } 00:11:24.429 ] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.429 BaseBdev3 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.429 [ 00:11:24.429 { 00:11:24.429 "name": "BaseBdev3", 00:11:24.429 "aliases": [ 00:11:24.429 "00b99bab-f939-4c0e-85fc-19ef74437bcf" 00:11:24.429 ], 00:11:24.429 "product_name": "Malloc disk", 00:11:24.429 "block_size": 512, 00:11:24.429 "num_blocks": 65536, 00:11:24.429 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:24.429 "assigned_rate_limits": { 00:11:24.429 "rw_ios_per_sec": 0, 00:11:24.429 "rw_mbytes_per_sec": 0, 00:11:24.429 "r_mbytes_per_sec": 0, 00:11:24.429 "w_mbytes_per_sec": 0 00:11:24.429 }, 00:11:24.429 "claimed": false, 00:11:24.429 "zoned": false, 00:11:24.429 "supported_io_types": { 00:11:24.429 "read": true, 00:11:24.429 "write": true, 00:11:24.429 "unmap": true, 00:11:24.429 "flush": true, 00:11:24.429 "reset": true, 00:11:24.429 "nvme_admin": false, 00:11:24.429 "nvme_io": false, 00:11:24.429 "nvme_io_md": false, 00:11:24.429 "write_zeroes": true, 00:11:24.429 "zcopy": true, 00:11:24.429 "get_zone_info": false, 00:11:24.429 "zone_management": false, 00:11:24.429 "zone_append": false, 
00:11:24.429 "compare": false, 00:11:24.429 "compare_and_write": false, 00:11:24.429 "abort": true, 00:11:24.429 "seek_hole": false, 00:11:24.429 "seek_data": false, 00:11:24.429 "copy": true, 00:11:24.429 "nvme_iov_md": false 00:11:24.429 }, 00:11:24.429 "memory_domains": [ 00:11:24.429 { 00:11:24.429 "dma_device_id": "system", 00:11:24.429 "dma_device_type": 1 00:11:24.429 }, 00:11:24.429 { 00:11:24.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.429 "dma_device_type": 2 00:11:24.429 } 00:11:24.429 ], 00:11:24.429 "driver_specific": {} 00:11:24.429 } 00:11:24.429 ] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.429 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.688 BaseBdev4 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.688 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.688 [ 00:11:24.688 { 00:11:24.688 "name": "BaseBdev4", 00:11:24.688 "aliases": [ 00:11:24.688 "7bfd5d35-b26b-4502-963b-6a5f4150d077" 00:11:24.688 ], 00:11:24.688 "product_name": "Malloc disk", 00:11:24.688 "block_size": 512, 00:11:24.688 "num_blocks": 65536, 00:11:24.688 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:24.688 "assigned_rate_limits": { 00:11:24.688 "rw_ios_per_sec": 0, 00:11:24.688 "rw_mbytes_per_sec": 0, 00:11:24.688 "r_mbytes_per_sec": 0, 00:11:24.688 "w_mbytes_per_sec": 0 00:11:24.688 }, 00:11:24.688 "claimed": false, 00:11:24.688 "zoned": false, 00:11:24.688 "supported_io_types": { 00:11:24.688 "read": true, 00:11:24.688 "write": true, 00:11:24.688 "unmap": true, 00:11:24.688 "flush": true, 00:11:24.688 "reset": true, 00:11:24.688 "nvme_admin": false, 00:11:24.688 "nvme_io": false, 00:11:24.688 "nvme_io_md": false, 00:11:24.688 "write_zeroes": true, 00:11:24.688 "zcopy": true, 00:11:24.688 "get_zone_info": false, 00:11:24.688 "zone_management": false, 00:11:24.688 "zone_append": false, 
00:11:24.688 "compare": false, 00:11:24.688 "compare_and_write": false, 00:11:24.688 "abort": true, 00:11:24.689 "seek_hole": false, 00:11:24.689 "seek_data": false, 00:11:24.689 "copy": true, 00:11:24.689 "nvme_iov_md": false 00:11:24.689 }, 00:11:24.689 "memory_domains": [ 00:11:24.689 { 00:11:24.689 "dma_device_id": "system", 00:11:24.689 "dma_device_type": 1 00:11:24.689 }, 00:11:24.689 { 00:11:24.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.689 "dma_device_type": 2 00:11:24.689 } 00:11:24.689 ], 00:11:24.689 "driver_specific": {} 00:11:24.689 } 00:11:24.689 ] 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.689 [2024-12-12 19:40:07.326973] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.689 [2024-12-12 19:40:07.327041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.689 [2024-12-12 19:40:07.327065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.689 [2024-12-12 19:40:07.329154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.689 [2024-12-12 19:40:07.329303] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:24.689 "name": "Existed_Raid", 00:11:24.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.689 "strip_size_kb": 0, 00:11:24.689 "state": "configuring", 00:11:24.689 "raid_level": "raid1", 00:11:24.689 "superblock": false, 00:11:24.689 "num_base_bdevs": 4, 00:11:24.689 "num_base_bdevs_discovered": 3, 00:11:24.689 "num_base_bdevs_operational": 4, 00:11:24.689 "base_bdevs_list": [ 00:11:24.689 { 00:11:24.689 "name": "BaseBdev1", 00:11:24.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.689 "is_configured": false, 00:11:24.689 "data_offset": 0, 00:11:24.689 "data_size": 0 00:11:24.689 }, 00:11:24.689 { 00:11:24.689 "name": "BaseBdev2", 00:11:24.689 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:24.689 "is_configured": true, 00:11:24.689 "data_offset": 0, 00:11:24.689 "data_size": 65536 00:11:24.689 }, 00:11:24.689 { 00:11:24.689 "name": "BaseBdev3", 00:11:24.689 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:24.689 "is_configured": true, 00:11:24.689 "data_offset": 0, 00:11:24.689 "data_size": 65536 00:11:24.689 }, 00:11:24.689 { 00:11:24.689 "name": "BaseBdev4", 00:11:24.689 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:24.689 "is_configured": true, 00:11:24.689 "data_offset": 0, 00:11:24.689 "data_size": 65536 00:11:24.689 } 00:11:24.689 ] 00:11:24.689 }' 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.689 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.257 [2024-12-12 19:40:07.794286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.257 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.257 "name": "Existed_Raid", 00:11:25.257 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:25.257 "strip_size_kb": 0, 00:11:25.257 "state": "configuring", 00:11:25.257 "raid_level": "raid1", 00:11:25.257 "superblock": false, 00:11:25.257 "num_base_bdevs": 4, 00:11:25.257 "num_base_bdevs_discovered": 2, 00:11:25.257 "num_base_bdevs_operational": 4, 00:11:25.257 "base_bdevs_list": [ 00:11:25.257 { 00:11:25.257 "name": "BaseBdev1", 00:11:25.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.257 "is_configured": false, 00:11:25.257 "data_offset": 0, 00:11:25.257 "data_size": 0 00:11:25.257 }, 00:11:25.257 { 00:11:25.257 "name": null, 00:11:25.257 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:25.257 "is_configured": false, 00:11:25.257 "data_offset": 0, 00:11:25.257 "data_size": 65536 00:11:25.257 }, 00:11:25.257 { 00:11:25.257 "name": "BaseBdev3", 00:11:25.257 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:25.257 "is_configured": true, 00:11:25.257 "data_offset": 0, 00:11:25.258 "data_size": 65536 00:11:25.258 }, 00:11:25.258 { 00:11:25.258 "name": "BaseBdev4", 00:11:25.258 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:25.258 "is_configured": true, 00:11:25.258 "data_offset": 0, 00:11:25.258 "data_size": 65536 00:11:25.258 } 00:11:25.258 ] 00:11:25.258 }' 00:11:25.258 19:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.258 19:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.519 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.519 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.519 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.519 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.519 19:40:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.519 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:25.519 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.519 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.519 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.519 [2024-12-12 19:40:08.291721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.519 BaseBdev1 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.520 [ 00:11:25.520 { 00:11:25.520 "name": "BaseBdev1", 00:11:25.520 "aliases": [ 00:11:25.520 "4c8e3864-f025-4eec-ab8d-9e0eea77df54" 00:11:25.520 ], 00:11:25.520 "product_name": "Malloc disk", 00:11:25.520 "block_size": 512, 00:11:25.520 "num_blocks": 65536, 00:11:25.520 "uuid": "4c8e3864-f025-4eec-ab8d-9e0eea77df54", 00:11:25.520 "assigned_rate_limits": { 00:11:25.520 "rw_ios_per_sec": 0, 00:11:25.520 "rw_mbytes_per_sec": 0, 00:11:25.520 "r_mbytes_per_sec": 0, 00:11:25.520 "w_mbytes_per_sec": 0 00:11:25.520 }, 00:11:25.520 "claimed": true, 00:11:25.520 "claim_type": "exclusive_write", 00:11:25.520 "zoned": false, 00:11:25.520 "supported_io_types": { 00:11:25.520 "read": true, 00:11:25.520 "write": true, 00:11:25.520 "unmap": true, 00:11:25.520 "flush": true, 00:11:25.520 "reset": true, 00:11:25.520 "nvme_admin": false, 00:11:25.520 "nvme_io": false, 00:11:25.520 "nvme_io_md": false, 00:11:25.520 "write_zeroes": true, 00:11:25.520 "zcopy": true, 00:11:25.520 "get_zone_info": false, 00:11:25.520 "zone_management": false, 00:11:25.520 "zone_append": false, 00:11:25.520 "compare": false, 00:11:25.520 "compare_and_write": false, 00:11:25.520 "abort": true, 00:11:25.520 "seek_hole": false, 00:11:25.520 "seek_data": false, 00:11:25.520 "copy": true, 00:11:25.520 "nvme_iov_md": false 00:11:25.520 }, 00:11:25.520 "memory_domains": [ 00:11:25.520 { 00:11:25.520 "dma_device_id": "system", 00:11:25.520 "dma_device_type": 1 00:11:25.520 }, 00:11:25.520 { 00:11:25.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.520 "dma_device_type": 2 00:11:25.520 } 00:11:25.520 ], 00:11:25.520 "driver_specific": {} 00:11:25.520 } 00:11:25.520 ] 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.520 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.779 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.779 "name": "Existed_Raid", 00:11:25.779 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:25.779 "strip_size_kb": 0, 00:11:25.779 "state": "configuring", 00:11:25.779 "raid_level": "raid1", 00:11:25.779 "superblock": false, 00:11:25.779 "num_base_bdevs": 4, 00:11:25.779 "num_base_bdevs_discovered": 3, 00:11:25.779 "num_base_bdevs_operational": 4, 00:11:25.779 "base_bdevs_list": [ 00:11:25.779 { 00:11:25.779 "name": "BaseBdev1", 00:11:25.779 "uuid": "4c8e3864-f025-4eec-ab8d-9e0eea77df54", 00:11:25.779 "is_configured": true, 00:11:25.779 "data_offset": 0, 00:11:25.779 "data_size": 65536 00:11:25.779 }, 00:11:25.779 { 00:11:25.779 "name": null, 00:11:25.779 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:25.779 "is_configured": false, 00:11:25.779 "data_offset": 0, 00:11:25.779 "data_size": 65536 00:11:25.779 }, 00:11:25.779 { 00:11:25.779 "name": "BaseBdev3", 00:11:25.779 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:25.779 "is_configured": true, 00:11:25.779 "data_offset": 0, 00:11:25.779 "data_size": 65536 00:11:25.779 }, 00:11:25.779 { 00:11:25.779 "name": "BaseBdev4", 00:11:25.779 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:25.779 "is_configured": true, 00:11:25.779 "data_offset": 0, 00:11:25.779 "data_size": 65536 00:11:25.779 } 00:11:25.779 ] 00:11:25.779 }' 00:11:25.779 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.779 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.038 [2024-12-12 19:40:08.854954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.038 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.297 19:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.297 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.297 "name": "Existed_Raid", 00:11:26.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.297 "strip_size_kb": 0, 00:11:26.297 "state": "configuring", 00:11:26.297 "raid_level": "raid1", 00:11:26.297 "superblock": false, 00:11:26.297 "num_base_bdevs": 4, 00:11:26.297 "num_base_bdevs_discovered": 2, 00:11:26.297 "num_base_bdevs_operational": 4, 00:11:26.297 "base_bdevs_list": [ 00:11:26.297 { 00:11:26.297 "name": "BaseBdev1", 00:11:26.297 "uuid": "4c8e3864-f025-4eec-ab8d-9e0eea77df54", 00:11:26.297 "is_configured": true, 00:11:26.297 "data_offset": 0, 00:11:26.297 "data_size": 65536 00:11:26.297 }, 00:11:26.297 { 00:11:26.297 "name": null, 00:11:26.297 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:26.297 "is_configured": false, 00:11:26.297 "data_offset": 0, 00:11:26.297 "data_size": 65536 00:11:26.297 }, 00:11:26.297 { 00:11:26.297 "name": null, 00:11:26.297 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:26.297 "is_configured": false, 00:11:26.297 "data_offset": 0, 00:11:26.297 "data_size": 65536 00:11:26.297 }, 00:11:26.297 { 00:11:26.297 "name": "BaseBdev4", 00:11:26.297 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:26.297 "is_configured": true, 00:11:26.297 "data_offset": 0, 00:11:26.297 "data_size": 65536 00:11:26.297 } 00:11:26.297 ] 00:11:26.297 }' 00:11:26.297 19:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.297 19:40:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.557 [2024-12-12 19:40:09.318709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.557 19:40:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.557 "name": "Existed_Raid", 00:11:26.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.557 "strip_size_kb": 0, 00:11:26.557 "state": "configuring", 00:11:26.557 "raid_level": "raid1", 00:11:26.557 "superblock": false, 00:11:26.557 "num_base_bdevs": 4, 00:11:26.557 "num_base_bdevs_discovered": 3, 00:11:26.557 "num_base_bdevs_operational": 4, 00:11:26.557 "base_bdevs_list": [ 00:11:26.557 { 00:11:26.557 "name": "BaseBdev1", 00:11:26.557 "uuid": "4c8e3864-f025-4eec-ab8d-9e0eea77df54", 00:11:26.557 "is_configured": true, 00:11:26.557 "data_offset": 0, 00:11:26.557 "data_size": 65536 00:11:26.557 }, 00:11:26.557 { 00:11:26.557 "name": null, 00:11:26.557 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:26.557 "is_configured": false, 00:11:26.557 "data_offset": 
0, 00:11:26.557 "data_size": 65536 00:11:26.557 }, 00:11:26.557 { 00:11:26.557 "name": "BaseBdev3", 00:11:26.557 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:26.557 "is_configured": true, 00:11:26.557 "data_offset": 0, 00:11:26.557 "data_size": 65536 00:11:26.557 }, 00:11:26.557 { 00:11:26.557 "name": "BaseBdev4", 00:11:26.557 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:26.557 "is_configured": true, 00:11:26.557 "data_offset": 0, 00:11:26.557 "data_size": 65536 00:11:26.557 } 00:11:26.557 ] 00:11:26.557 }' 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.557 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.127 [2024-12-12 19:40:09.813954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.127 19:40:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.127 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.387 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.387 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.387 "name": "Existed_Raid", 00:11:27.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.387 "strip_size_kb": 0, 00:11:27.387 "state": "configuring", 00:11:27.387 
"raid_level": "raid1", 00:11:27.387 "superblock": false, 00:11:27.387 "num_base_bdevs": 4, 00:11:27.387 "num_base_bdevs_discovered": 2, 00:11:27.387 "num_base_bdevs_operational": 4, 00:11:27.387 "base_bdevs_list": [ 00:11:27.387 { 00:11:27.387 "name": null, 00:11:27.387 "uuid": "4c8e3864-f025-4eec-ab8d-9e0eea77df54", 00:11:27.387 "is_configured": false, 00:11:27.387 "data_offset": 0, 00:11:27.387 "data_size": 65536 00:11:27.387 }, 00:11:27.387 { 00:11:27.387 "name": null, 00:11:27.387 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:27.387 "is_configured": false, 00:11:27.387 "data_offset": 0, 00:11:27.387 "data_size": 65536 00:11:27.387 }, 00:11:27.387 { 00:11:27.387 "name": "BaseBdev3", 00:11:27.387 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:27.387 "is_configured": true, 00:11:27.387 "data_offset": 0, 00:11:27.387 "data_size": 65536 00:11:27.387 }, 00:11:27.387 { 00:11:27.387 "name": "BaseBdev4", 00:11:27.387 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:27.387 "is_configured": true, 00:11:27.387 "data_offset": 0, 00:11:27.387 "data_size": 65536 00:11:27.387 } 00:11:27.387 ] 00:11:27.387 }' 00:11:27.387 19:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.387 19:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.647 [2024-12-12 19:40:10.472794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.647 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.905 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.905 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.905 "name": "Existed_Raid", 00:11:27.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.905 "strip_size_kb": 0, 00:11:27.905 "state": "configuring", 00:11:27.905 "raid_level": "raid1", 00:11:27.905 "superblock": false, 00:11:27.905 "num_base_bdevs": 4, 00:11:27.905 "num_base_bdevs_discovered": 3, 00:11:27.905 "num_base_bdevs_operational": 4, 00:11:27.905 "base_bdevs_list": [ 00:11:27.905 { 00:11:27.905 "name": null, 00:11:27.905 "uuid": "4c8e3864-f025-4eec-ab8d-9e0eea77df54", 00:11:27.905 "is_configured": false, 00:11:27.905 "data_offset": 0, 00:11:27.905 "data_size": 65536 00:11:27.905 }, 00:11:27.906 { 00:11:27.906 "name": "BaseBdev2", 00:11:27.906 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:27.906 "is_configured": true, 00:11:27.906 "data_offset": 0, 00:11:27.906 "data_size": 65536 00:11:27.906 }, 00:11:27.906 { 00:11:27.906 "name": "BaseBdev3", 00:11:27.906 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:27.906 "is_configured": true, 00:11:27.906 "data_offset": 0, 00:11:27.906 "data_size": 65536 00:11:27.906 }, 00:11:27.906 { 00:11:27.906 "name": "BaseBdev4", 00:11:27.906 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:27.906 "is_configured": true, 00:11:27.906 "data_offset": 0, 00:11:27.906 "data_size": 65536 00:11:27.906 } 00:11:27.906 ] 00:11:27.906 }' 00:11:27.906 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.906 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.165 19:40:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4c8e3864-f025-4eec-ab8d-9e0eea77df54 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.165 19:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.424 [2024-12-12 19:40:11.029950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:28.424 [2024-12-12 19:40:11.030103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:28.424 [2024-12-12 19:40:11.030125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:28.424 
[2024-12-12 19:40:11.030509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:28.424 [2024-12-12 19:40:11.030755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:28.424 [2024-12-12 19:40:11.030769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:28.424 NewBaseBdev 00:11:28.424 [2024-12-12 19:40:11.031124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.424 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.424 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:28.424 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.425 [ 00:11:28.425 { 00:11:28.425 "name": "NewBaseBdev", 00:11:28.425 "aliases": [ 00:11:28.425 "4c8e3864-f025-4eec-ab8d-9e0eea77df54" 00:11:28.425 ], 00:11:28.425 "product_name": "Malloc disk", 00:11:28.425 "block_size": 512, 00:11:28.425 "num_blocks": 65536, 00:11:28.425 "uuid": "4c8e3864-f025-4eec-ab8d-9e0eea77df54", 00:11:28.425 "assigned_rate_limits": { 00:11:28.425 "rw_ios_per_sec": 0, 00:11:28.425 "rw_mbytes_per_sec": 0, 00:11:28.425 "r_mbytes_per_sec": 0, 00:11:28.425 "w_mbytes_per_sec": 0 00:11:28.425 }, 00:11:28.425 "claimed": true, 00:11:28.425 "claim_type": "exclusive_write", 00:11:28.425 "zoned": false, 00:11:28.425 "supported_io_types": { 00:11:28.425 "read": true, 00:11:28.425 "write": true, 00:11:28.425 "unmap": true, 00:11:28.425 "flush": true, 00:11:28.425 "reset": true, 00:11:28.425 "nvme_admin": false, 00:11:28.425 "nvme_io": false, 00:11:28.425 "nvme_io_md": false, 00:11:28.425 "write_zeroes": true, 00:11:28.425 "zcopy": true, 00:11:28.425 "get_zone_info": false, 00:11:28.425 "zone_management": false, 00:11:28.425 "zone_append": false, 00:11:28.425 "compare": false, 00:11:28.425 "compare_and_write": false, 00:11:28.425 "abort": true, 00:11:28.425 "seek_hole": false, 00:11:28.425 "seek_data": false, 00:11:28.425 "copy": true, 00:11:28.425 "nvme_iov_md": false 00:11:28.425 }, 00:11:28.425 "memory_domains": [ 00:11:28.425 { 00:11:28.425 "dma_device_id": "system", 00:11:28.425 "dma_device_type": 1 00:11:28.425 }, 00:11:28.425 { 00:11:28.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.425 "dma_device_type": 2 00:11:28.425 } 00:11:28.425 ], 00:11:28.425 "driver_specific": {} 00:11:28.425 } 00:11:28.425 ] 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.425 "name": "Existed_Raid", 00:11:28.425 "uuid": "9a618206-9f17-438f-8c7a-dd16c0316780", 00:11:28.425 "strip_size_kb": 0, 00:11:28.425 "state": "online", 00:11:28.425 
"raid_level": "raid1", 00:11:28.425 "superblock": false, 00:11:28.425 "num_base_bdevs": 4, 00:11:28.425 "num_base_bdevs_discovered": 4, 00:11:28.425 "num_base_bdevs_operational": 4, 00:11:28.425 "base_bdevs_list": [ 00:11:28.425 { 00:11:28.425 "name": "NewBaseBdev", 00:11:28.425 "uuid": "4c8e3864-f025-4eec-ab8d-9e0eea77df54", 00:11:28.425 "is_configured": true, 00:11:28.425 "data_offset": 0, 00:11:28.425 "data_size": 65536 00:11:28.425 }, 00:11:28.425 { 00:11:28.425 "name": "BaseBdev2", 00:11:28.425 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:28.425 "is_configured": true, 00:11:28.425 "data_offset": 0, 00:11:28.425 "data_size": 65536 00:11:28.425 }, 00:11:28.425 { 00:11:28.425 "name": "BaseBdev3", 00:11:28.425 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:28.425 "is_configured": true, 00:11:28.425 "data_offset": 0, 00:11:28.425 "data_size": 65536 00:11:28.425 }, 00:11:28.425 { 00:11:28.425 "name": "BaseBdev4", 00:11:28.425 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:28.425 "is_configured": true, 00:11:28.425 "data_offset": 0, 00:11:28.425 "data_size": 65536 00:11:28.425 } 00:11:28.425 ] 00:11:28.425 }' 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.425 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.684 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.943 [2024-12-12 19:40:11.529840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.943 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.943 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.943 "name": "Existed_Raid", 00:11:28.943 "aliases": [ 00:11:28.943 "9a618206-9f17-438f-8c7a-dd16c0316780" 00:11:28.943 ], 00:11:28.943 "product_name": "Raid Volume", 00:11:28.943 "block_size": 512, 00:11:28.943 "num_blocks": 65536, 00:11:28.943 "uuid": "9a618206-9f17-438f-8c7a-dd16c0316780", 00:11:28.943 "assigned_rate_limits": { 00:11:28.943 "rw_ios_per_sec": 0, 00:11:28.944 "rw_mbytes_per_sec": 0, 00:11:28.944 "r_mbytes_per_sec": 0, 00:11:28.944 "w_mbytes_per_sec": 0 00:11:28.944 }, 00:11:28.944 "claimed": false, 00:11:28.944 "zoned": false, 00:11:28.944 "supported_io_types": { 00:11:28.944 "read": true, 00:11:28.944 "write": true, 00:11:28.944 "unmap": false, 00:11:28.944 "flush": false, 00:11:28.944 "reset": true, 00:11:28.944 "nvme_admin": false, 00:11:28.944 "nvme_io": false, 00:11:28.944 "nvme_io_md": false, 00:11:28.944 "write_zeroes": true, 00:11:28.944 "zcopy": false, 00:11:28.944 "get_zone_info": false, 00:11:28.944 "zone_management": false, 00:11:28.944 "zone_append": false, 00:11:28.944 "compare": false, 00:11:28.944 "compare_and_write": false, 00:11:28.944 "abort": false, 00:11:28.944 "seek_hole": false, 00:11:28.944 "seek_data": false, 00:11:28.944 
"copy": false, 00:11:28.944 "nvme_iov_md": false 00:11:28.944 }, 00:11:28.944 "memory_domains": [ 00:11:28.944 { 00:11:28.944 "dma_device_id": "system", 00:11:28.944 "dma_device_type": 1 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.944 "dma_device_type": 2 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "dma_device_id": "system", 00:11:28.944 "dma_device_type": 1 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.944 "dma_device_type": 2 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "dma_device_id": "system", 00:11:28.944 "dma_device_type": 1 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.944 "dma_device_type": 2 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "dma_device_id": "system", 00:11:28.944 "dma_device_type": 1 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.944 "dma_device_type": 2 00:11:28.944 } 00:11:28.944 ], 00:11:28.944 "driver_specific": { 00:11:28.944 "raid": { 00:11:28.944 "uuid": "9a618206-9f17-438f-8c7a-dd16c0316780", 00:11:28.944 "strip_size_kb": 0, 00:11:28.944 "state": "online", 00:11:28.944 "raid_level": "raid1", 00:11:28.944 "superblock": false, 00:11:28.944 "num_base_bdevs": 4, 00:11:28.944 "num_base_bdevs_discovered": 4, 00:11:28.944 "num_base_bdevs_operational": 4, 00:11:28.944 "base_bdevs_list": [ 00:11:28.944 { 00:11:28.944 "name": "NewBaseBdev", 00:11:28.944 "uuid": "4c8e3864-f025-4eec-ab8d-9e0eea77df54", 00:11:28.944 "is_configured": true, 00:11:28.944 "data_offset": 0, 00:11:28.944 "data_size": 65536 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "name": "BaseBdev2", 00:11:28.944 "uuid": "2f3a914e-1fb8-46de-b4c1-1ba7dcb542ad", 00:11:28.944 "is_configured": true, 00:11:28.944 "data_offset": 0, 00:11:28.944 "data_size": 65536 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "name": "BaseBdev3", 00:11:28.944 "uuid": "00b99bab-f939-4c0e-85fc-19ef74437bcf", 00:11:28.944 
"is_configured": true, 00:11:28.944 "data_offset": 0, 00:11:28.944 "data_size": 65536 00:11:28.944 }, 00:11:28.944 { 00:11:28.944 "name": "BaseBdev4", 00:11:28.944 "uuid": "7bfd5d35-b26b-4502-963b-6a5f4150d077", 00:11:28.944 "is_configured": true, 00:11:28.944 "data_offset": 0, 00:11:28.944 "data_size": 65536 00:11:28.944 } 00:11:28.944 ] 00:11:28.944 } 00:11:28.944 } 00:11:28.944 }' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:28.944 BaseBdev2 00:11:28.944 BaseBdev3 00:11:28.944 BaseBdev4' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.944 19:40:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.944 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.204 19:40:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.204 [2024-12-12 19:40:11.884778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.204 [2024-12-12 19:40:11.884885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.204 [2024-12-12 19:40:11.885031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.204 [2024-12-12 19:40:11.885470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.204 [2024-12-12 19:40:11.885558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 74881 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74881 ']' 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74881 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74881 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.204 killing process with pid 74881 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74881' 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74881 00:11:29.204 19:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74881 00:11:29.204 [2024-12-12 19:40:11.931347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.790 [2024-12-12 19:40:12.428705] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:31.181 00:11:31.181 real 0m12.219s 00:11:31.181 user 0m18.883s 00:11:31.181 sys 0m2.288s 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.181 ************************************ 00:11:31.181 END TEST raid_state_function_test 00:11:31.181 ************************************ 
00:11:31.181 19:40:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:31.181 19:40:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:31.181 19:40:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.181 19:40:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:31.181 ************************************ 00:11:31.181 START TEST raid_state_function_test_sb 00:11:31.181 ************************************ 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.181 
19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.181 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75559 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75559' 00:11:31.182 Process raid pid: 75559 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75559 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75559 ']' 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.182 19:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.440 [2024-12-12 19:40:14.088302] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:31.440 [2024-12-12 19:40:14.088549] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.440 [2024-12-12 19:40:14.278062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.698 [2024-12-12 19:40:14.439266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.957 [2024-12-12 19:40:14.727812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.958 [2024-12-12 19:40:14.727991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.217 19:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.217 19:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:32.217 19:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.217 19:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.217 19:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.217 [2024-12-12 19:40:14.995457] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.217 [2024-12-12 19:40:14.995526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.217 [2024-12-12 19:40:14.995569] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.217 [2024-12-12 19:40:14.995585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.217 [2024-12-12 19:40:14.995593] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:32.218 [2024-12-12 19:40:14.995606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.218 [2024-12-12 19:40:14.995613] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:32.218 [2024-12-12 19:40:14.995625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.218 19:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.218 19:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.218 19:40:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.218 "name": "Existed_Raid", 00:11:32.218 "uuid": "b8226e3e-6e80-4e02-a3cd-1672c4b63e38", 00:11:32.218 "strip_size_kb": 0, 00:11:32.218 "state": "configuring", 00:11:32.218 "raid_level": "raid1", 00:11:32.218 "superblock": true, 00:11:32.218 "num_base_bdevs": 4, 00:11:32.218 "num_base_bdevs_discovered": 0, 00:11:32.218 "num_base_bdevs_operational": 4, 00:11:32.218 "base_bdevs_list": [ 00:11:32.218 { 00:11:32.218 "name": "BaseBdev1", 00:11:32.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.218 "is_configured": false, 00:11:32.218 "data_offset": 0, 00:11:32.218 "data_size": 0 00:11:32.218 }, 00:11:32.218 { 00:11:32.218 "name": "BaseBdev2", 00:11:32.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.218 "is_configured": false, 00:11:32.218 "data_offset": 0, 00:11:32.218 "data_size": 0 00:11:32.218 }, 00:11:32.218 { 00:11:32.218 "name": "BaseBdev3", 00:11:32.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.218 "is_configured": false, 00:11:32.218 "data_offset": 0, 00:11:32.218 "data_size": 0 00:11:32.218 }, 00:11:32.218 { 00:11:32.218 "name": "BaseBdev4", 00:11:32.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.218 "is_configured": false, 00:11:32.218 "data_offset": 0, 00:11:32.218 "data_size": 0 00:11:32.218 } 00:11:32.218 ] 00:11:32.218 }' 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.218 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.790 [2024-12-12 19:40:15.442633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.790 [2024-12-12 19:40:15.442773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.790 [2024-12-12 19:40:15.454581] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.790 [2024-12-12 19:40:15.454681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.790 [2024-12-12 19:40:15.454711] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.790 [2024-12-12 19:40:15.454735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.790 [2024-12-12 19:40:15.454754] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.790 [2024-12-12 19:40:15.454778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.790 [2024-12-12 19:40:15.454796] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:32.790 [2024-12-12 19:40:15.454843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.790 [2024-12-12 19:40:15.512766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.790 BaseBdev1 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.790 [ 00:11:32.790 { 00:11:32.790 "name": "BaseBdev1", 00:11:32.790 "aliases": [ 00:11:32.790 "89024de1-cbda-4afd-a8c5-6a371dd707e2" 00:11:32.790 ], 00:11:32.790 "product_name": "Malloc disk", 00:11:32.790 "block_size": 512, 00:11:32.790 "num_blocks": 65536, 00:11:32.790 "uuid": "89024de1-cbda-4afd-a8c5-6a371dd707e2", 00:11:32.790 "assigned_rate_limits": { 00:11:32.790 "rw_ios_per_sec": 0, 00:11:32.790 "rw_mbytes_per_sec": 0, 00:11:32.790 "r_mbytes_per_sec": 0, 00:11:32.790 "w_mbytes_per_sec": 0 00:11:32.790 }, 00:11:32.790 "claimed": true, 00:11:32.790 "claim_type": "exclusive_write", 00:11:32.790 "zoned": false, 00:11:32.790 "supported_io_types": { 00:11:32.790 "read": true, 00:11:32.790 "write": true, 00:11:32.790 "unmap": true, 00:11:32.790 "flush": true, 00:11:32.790 "reset": true, 00:11:32.790 "nvme_admin": false, 00:11:32.790 "nvme_io": false, 00:11:32.790 "nvme_io_md": false, 00:11:32.790 "write_zeroes": true, 00:11:32.790 "zcopy": true, 00:11:32.790 "get_zone_info": false, 00:11:32.790 "zone_management": false, 00:11:32.790 "zone_append": false, 00:11:32.790 "compare": false, 00:11:32.790 "compare_and_write": false, 00:11:32.790 "abort": true, 00:11:32.790 "seek_hole": false, 00:11:32.790 "seek_data": false, 00:11:32.790 "copy": true, 00:11:32.790 "nvme_iov_md": false 00:11:32.790 }, 00:11:32.790 "memory_domains": [ 00:11:32.790 { 00:11:32.790 "dma_device_id": "system", 00:11:32.790 "dma_device_type": 1 00:11:32.790 }, 00:11:32.790 { 00:11:32.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.790 "dma_device_type": 2 00:11:32.790 } 00:11:32.790 ], 00:11:32.790 "driver_specific": {} 
00:11:32.790 } 00:11:32.790 ] 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.790 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.790 "name": "Existed_Raid", 00:11:32.790 "uuid": "4480627f-4a0e-4153-94a8-fddf5a41e7b9", 00:11:32.790 "strip_size_kb": 0, 00:11:32.790 "state": "configuring", 00:11:32.790 "raid_level": "raid1", 00:11:32.790 "superblock": true, 00:11:32.790 "num_base_bdevs": 4, 00:11:32.790 "num_base_bdevs_discovered": 1, 00:11:32.790 "num_base_bdevs_operational": 4, 00:11:32.790 "base_bdevs_list": [ 00:11:32.790 { 00:11:32.790 "name": "BaseBdev1", 00:11:32.790 "uuid": "89024de1-cbda-4afd-a8c5-6a371dd707e2", 00:11:32.790 "is_configured": true, 00:11:32.790 "data_offset": 2048, 00:11:32.790 "data_size": 63488 00:11:32.790 }, 00:11:32.790 { 00:11:32.790 "name": "BaseBdev2", 00:11:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.790 "is_configured": false, 00:11:32.790 "data_offset": 0, 00:11:32.790 "data_size": 0 00:11:32.790 }, 00:11:32.790 { 00:11:32.790 "name": "BaseBdev3", 00:11:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.790 "is_configured": false, 00:11:32.790 "data_offset": 0, 00:11:32.790 "data_size": 0 00:11:32.790 }, 00:11:32.790 { 00:11:32.790 "name": "BaseBdev4", 00:11:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.790 "is_configured": false, 00:11:32.790 "data_offset": 0, 00:11:32.790 "data_size": 0 00:11:32.790 } 00:11:32.790 ] 00:11:32.790 }' 00:11:32.791 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.791 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.361 [2024-12-12 19:40:15.972072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.361 [2024-12-12 19:40:15.972232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.361 [2024-12-12 19:40:15.984127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.361 [2024-12-12 19:40:15.986803] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.361 [2024-12-12 19:40:15.986898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.361 [2024-12-12 19:40:15.986937] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.361 [2024-12-12 19:40:15.986968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.361 [2024-12-12 19:40:15.987014] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.361 [2024-12-12 19:40:15.987059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:33.361 19:40:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.361 19:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.361 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.361 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.361 "name": 
"Existed_Raid", 00:11:33.361 "uuid": "7fac129d-c0e3-4279-9242-d00634a0cf20", 00:11:33.361 "strip_size_kb": 0, 00:11:33.361 "state": "configuring", 00:11:33.361 "raid_level": "raid1", 00:11:33.361 "superblock": true, 00:11:33.361 "num_base_bdevs": 4, 00:11:33.361 "num_base_bdevs_discovered": 1, 00:11:33.361 "num_base_bdevs_operational": 4, 00:11:33.361 "base_bdevs_list": [ 00:11:33.361 { 00:11:33.361 "name": "BaseBdev1", 00:11:33.361 "uuid": "89024de1-cbda-4afd-a8c5-6a371dd707e2", 00:11:33.361 "is_configured": true, 00:11:33.361 "data_offset": 2048, 00:11:33.361 "data_size": 63488 00:11:33.361 }, 00:11:33.361 { 00:11:33.361 "name": "BaseBdev2", 00:11:33.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.361 "is_configured": false, 00:11:33.361 "data_offset": 0, 00:11:33.361 "data_size": 0 00:11:33.361 }, 00:11:33.361 { 00:11:33.361 "name": "BaseBdev3", 00:11:33.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.361 "is_configured": false, 00:11:33.361 "data_offset": 0, 00:11:33.361 "data_size": 0 00:11:33.361 }, 00:11:33.361 { 00:11:33.361 "name": "BaseBdev4", 00:11:33.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.361 "is_configured": false, 00:11:33.361 "data_offset": 0, 00:11:33.361 "data_size": 0 00:11:33.361 } 00:11:33.361 ] 00:11:33.361 }' 00:11:33.361 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.361 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.621 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:33.621 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.621 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.881 [2024-12-12 19:40:16.510309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.881 
BaseBdev2 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.881 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.881 [ 00:11:33.881 { 00:11:33.881 "name": "BaseBdev2", 00:11:33.881 "aliases": [ 00:11:33.881 "fc53d6b3-4357-450c-ae5c-83ef9463bd5a" 00:11:33.881 ], 00:11:33.881 "product_name": "Malloc disk", 00:11:33.881 "block_size": 512, 00:11:33.881 "num_blocks": 65536, 00:11:33.881 "uuid": "fc53d6b3-4357-450c-ae5c-83ef9463bd5a", 00:11:33.881 "assigned_rate_limits": { 
00:11:33.881 "rw_ios_per_sec": 0, 00:11:33.881 "rw_mbytes_per_sec": 0, 00:11:33.881 "r_mbytes_per_sec": 0, 00:11:33.881 "w_mbytes_per_sec": 0 00:11:33.881 }, 00:11:33.881 "claimed": true, 00:11:33.882 "claim_type": "exclusive_write", 00:11:33.882 "zoned": false, 00:11:33.882 "supported_io_types": { 00:11:33.882 "read": true, 00:11:33.882 "write": true, 00:11:33.882 "unmap": true, 00:11:33.882 "flush": true, 00:11:33.882 "reset": true, 00:11:33.882 "nvme_admin": false, 00:11:33.882 "nvme_io": false, 00:11:33.882 "nvme_io_md": false, 00:11:33.882 "write_zeroes": true, 00:11:33.882 "zcopy": true, 00:11:33.882 "get_zone_info": false, 00:11:33.882 "zone_management": false, 00:11:33.882 "zone_append": false, 00:11:33.882 "compare": false, 00:11:33.882 "compare_and_write": false, 00:11:33.882 "abort": true, 00:11:33.882 "seek_hole": false, 00:11:33.882 "seek_data": false, 00:11:33.882 "copy": true, 00:11:33.882 "nvme_iov_md": false 00:11:33.882 }, 00:11:33.882 "memory_domains": [ 00:11:33.882 { 00:11:33.882 "dma_device_id": "system", 00:11:33.882 "dma_device_type": 1 00:11:33.882 }, 00:11:33.882 { 00:11:33.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.882 "dma_device_type": 2 00:11:33.882 } 00:11:33.882 ], 00:11:33.882 "driver_specific": {} 00:11:33.882 } 00:11:33.882 ] 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.882 "name": "Existed_Raid", 00:11:33.882 "uuid": "7fac129d-c0e3-4279-9242-d00634a0cf20", 00:11:33.882 "strip_size_kb": 0, 00:11:33.882 "state": "configuring", 00:11:33.882 "raid_level": "raid1", 00:11:33.882 "superblock": true, 00:11:33.882 "num_base_bdevs": 4, 00:11:33.882 "num_base_bdevs_discovered": 2, 00:11:33.882 "num_base_bdevs_operational": 4, 00:11:33.882 
"base_bdevs_list": [ 00:11:33.882 { 00:11:33.882 "name": "BaseBdev1", 00:11:33.882 "uuid": "89024de1-cbda-4afd-a8c5-6a371dd707e2", 00:11:33.882 "is_configured": true, 00:11:33.882 "data_offset": 2048, 00:11:33.882 "data_size": 63488 00:11:33.882 }, 00:11:33.882 { 00:11:33.882 "name": "BaseBdev2", 00:11:33.882 "uuid": "fc53d6b3-4357-450c-ae5c-83ef9463bd5a", 00:11:33.882 "is_configured": true, 00:11:33.882 "data_offset": 2048, 00:11:33.882 "data_size": 63488 00:11:33.882 }, 00:11:33.882 { 00:11:33.882 "name": "BaseBdev3", 00:11:33.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.882 "is_configured": false, 00:11:33.882 "data_offset": 0, 00:11:33.882 "data_size": 0 00:11:33.882 }, 00:11:33.882 { 00:11:33.882 "name": "BaseBdev4", 00:11:33.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.882 "is_configured": false, 00:11:33.882 "data_offset": 0, 00:11:33.882 "data_size": 0 00:11:33.882 } 00:11:33.882 ] 00:11:33.882 }' 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.882 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.142 19:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:34.142 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.142 19:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.402 [2024-12-12 19:40:17.028251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.402 BaseBdev3 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.402 [ 00:11:34.402 { 00:11:34.402 "name": "BaseBdev3", 00:11:34.402 "aliases": [ 00:11:34.402 "deed5f45-bea8-4e23-902f-0497ab1f3712" 00:11:34.402 ], 00:11:34.402 "product_name": "Malloc disk", 00:11:34.402 "block_size": 512, 00:11:34.402 "num_blocks": 65536, 00:11:34.402 "uuid": "deed5f45-bea8-4e23-902f-0497ab1f3712", 00:11:34.402 "assigned_rate_limits": { 00:11:34.402 "rw_ios_per_sec": 0, 00:11:34.402 "rw_mbytes_per_sec": 0, 00:11:34.402 "r_mbytes_per_sec": 0, 00:11:34.402 "w_mbytes_per_sec": 0 00:11:34.402 }, 00:11:34.402 "claimed": true, 00:11:34.402 "claim_type": "exclusive_write", 00:11:34.402 "zoned": false, 00:11:34.402 "supported_io_types": { 00:11:34.402 "read": true, 00:11:34.402 
"write": true, 00:11:34.402 "unmap": true, 00:11:34.402 "flush": true, 00:11:34.402 "reset": true, 00:11:34.402 "nvme_admin": false, 00:11:34.402 "nvme_io": false, 00:11:34.402 "nvme_io_md": false, 00:11:34.402 "write_zeroes": true, 00:11:34.402 "zcopy": true, 00:11:34.402 "get_zone_info": false, 00:11:34.402 "zone_management": false, 00:11:34.402 "zone_append": false, 00:11:34.402 "compare": false, 00:11:34.402 "compare_and_write": false, 00:11:34.402 "abort": true, 00:11:34.402 "seek_hole": false, 00:11:34.402 "seek_data": false, 00:11:34.402 "copy": true, 00:11:34.402 "nvme_iov_md": false 00:11:34.402 }, 00:11:34.402 "memory_domains": [ 00:11:34.402 { 00:11:34.402 "dma_device_id": "system", 00:11:34.402 "dma_device_type": 1 00:11:34.402 }, 00:11:34.402 { 00:11:34.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.402 "dma_device_type": 2 00:11:34.402 } 00:11:34.402 ], 00:11:34.402 "driver_specific": {} 00:11:34.402 } 00:11:34.402 ] 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.402 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.403 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.403 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.403 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.403 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.403 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.403 "name": "Existed_Raid", 00:11:34.403 "uuid": "7fac129d-c0e3-4279-9242-d00634a0cf20", 00:11:34.403 "strip_size_kb": 0, 00:11:34.403 "state": "configuring", 00:11:34.403 "raid_level": "raid1", 00:11:34.403 "superblock": true, 00:11:34.403 "num_base_bdevs": 4, 00:11:34.403 "num_base_bdevs_discovered": 3, 00:11:34.403 "num_base_bdevs_operational": 4, 00:11:34.403 "base_bdevs_list": [ 00:11:34.403 { 00:11:34.403 "name": "BaseBdev1", 00:11:34.403 "uuid": "89024de1-cbda-4afd-a8c5-6a371dd707e2", 00:11:34.403 "is_configured": true, 00:11:34.403 "data_offset": 2048, 00:11:34.403 "data_size": 63488 00:11:34.403 }, 00:11:34.403 { 00:11:34.403 "name": "BaseBdev2", 00:11:34.403 "uuid": 
"fc53d6b3-4357-450c-ae5c-83ef9463bd5a", 00:11:34.403 "is_configured": true, 00:11:34.403 "data_offset": 2048, 00:11:34.403 "data_size": 63488 00:11:34.403 }, 00:11:34.403 { 00:11:34.403 "name": "BaseBdev3", 00:11:34.403 "uuid": "deed5f45-bea8-4e23-902f-0497ab1f3712", 00:11:34.403 "is_configured": true, 00:11:34.403 "data_offset": 2048, 00:11:34.403 "data_size": 63488 00:11:34.403 }, 00:11:34.403 { 00:11:34.403 "name": "BaseBdev4", 00:11:34.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.403 "is_configured": false, 00:11:34.403 "data_offset": 0, 00:11:34.403 "data_size": 0 00:11:34.403 } 00:11:34.403 ] 00:11:34.403 }' 00:11:34.403 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.403 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.662 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:34.662 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.662 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.921 [2024-12-12 19:40:17.537513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:34.921 [2024-12-12 19:40:17.538000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:34.921 [2024-12-12 19:40:17.538055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.921 [2024-12-12 19:40:17.538370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:34.921 [2024-12-12 19:40:17.538590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:34.921 BaseBdev4 00:11:34.921 [2024-12-12 19:40:17.538641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:34.921 [2024-12-12 19:40:17.538821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.921 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.921 [ 00:11:34.921 { 00:11:34.921 "name": "BaseBdev4", 00:11:34.921 "aliases": [ 00:11:34.921 "b3d94e9f-0151-441f-9f93-12a9bfbaf0b4" 00:11:34.921 ], 00:11:34.921 "product_name": "Malloc disk", 00:11:34.921 "block_size": 512, 00:11:34.921 
"num_blocks": 65536, 00:11:34.921 "uuid": "b3d94e9f-0151-441f-9f93-12a9bfbaf0b4", 00:11:34.921 "assigned_rate_limits": { 00:11:34.921 "rw_ios_per_sec": 0, 00:11:34.921 "rw_mbytes_per_sec": 0, 00:11:34.921 "r_mbytes_per_sec": 0, 00:11:34.921 "w_mbytes_per_sec": 0 00:11:34.921 }, 00:11:34.921 "claimed": true, 00:11:34.921 "claim_type": "exclusive_write", 00:11:34.921 "zoned": false, 00:11:34.921 "supported_io_types": { 00:11:34.921 "read": true, 00:11:34.921 "write": true, 00:11:34.921 "unmap": true, 00:11:34.921 "flush": true, 00:11:34.921 "reset": true, 00:11:34.921 "nvme_admin": false, 00:11:34.921 "nvme_io": false, 00:11:34.921 "nvme_io_md": false, 00:11:34.921 "write_zeroes": true, 00:11:34.921 "zcopy": true, 00:11:34.921 "get_zone_info": false, 00:11:34.921 "zone_management": false, 00:11:34.922 "zone_append": false, 00:11:34.922 "compare": false, 00:11:34.922 "compare_and_write": false, 00:11:34.922 "abort": true, 00:11:34.922 "seek_hole": false, 00:11:34.922 "seek_data": false, 00:11:34.922 "copy": true, 00:11:34.922 "nvme_iov_md": false 00:11:34.922 }, 00:11:34.922 "memory_domains": [ 00:11:34.922 { 00:11:34.922 "dma_device_id": "system", 00:11:34.922 "dma_device_type": 1 00:11:34.922 }, 00:11:34.922 { 00:11:34.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.922 "dma_device_type": 2 00:11:34.922 } 00:11:34.922 ], 00:11:34.922 "driver_specific": {} 00:11:34.922 } 00:11:34.922 ] 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.922 "name": "Existed_Raid", 00:11:34.922 "uuid": "7fac129d-c0e3-4279-9242-d00634a0cf20", 00:11:34.922 "strip_size_kb": 0, 00:11:34.922 "state": "online", 00:11:34.922 "raid_level": "raid1", 00:11:34.922 "superblock": true, 00:11:34.922 "num_base_bdevs": 4, 
00:11:34.922 "num_base_bdevs_discovered": 4, 00:11:34.922 "num_base_bdevs_operational": 4, 00:11:34.922 "base_bdevs_list": [ 00:11:34.922 { 00:11:34.922 "name": "BaseBdev1", 00:11:34.922 "uuid": "89024de1-cbda-4afd-a8c5-6a371dd707e2", 00:11:34.922 "is_configured": true, 00:11:34.922 "data_offset": 2048, 00:11:34.922 "data_size": 63488 00:11:34.922 }, 00:11:34.922 { 00:11:34.922 "name": "BaseBdev2", 00:11:34.922 "uuid": "fc53d6b3-4357-450c-ae5c-83ef9463bd5a", 00:11:34.922 "is_configured": true, 00:11:34.922 "data_offset": 2048, 00:11:34.922 "data_size": 63488 00:11:34.922 }, 00:11:34.922 { 00:11:34.922 "name": "BaseBdev3", 00:11:34.922 "uuid": "deed5f45-bea8-4e23-902f-0497ab1f3712", 00:11:34.922 "is_configured": true, 00:11:34.922 "data_offset": 2048, 00:11:34.922 "data_size": 63488 00:11:34.922 }, 00:11:34.922 { 00:11:34.922 "name": "BaseBdev4", 00:11:34.922 "uuid": "b3d94e9f-0151-441f-9f93-12a9bfbaf0b4", 00:11:34.922 "is_configured": true, 00:11:34.922 "data_offset": 2048, 00:11:34.922 "data_size": 63488 00:11:34.922 } 00:11:34.922 ] 00:11:34.922 }' 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.922 19:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.181 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.181 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.181 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.181 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.181 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.181 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.181 
19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.181 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.181 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.181 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.181 [2024-12-12 19:40:18.017075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.441 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.441 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.441 "name": "Existed_Raid", 00:11:35.441 "aliases": [ 00:11:35.441 "7fac129d-c0e3-4279-9242-d00634a0cf20" 00:11:35.441 ], 00:11:35.441 "product_name": "Raid Volume", 00:11:35.441 "block_size": 512, 00:11:35.441 "num_blocks": 63488, 00:11:35.441 "uuid": "7fac129d-c0e3-4279-9242-d00634a0cf20", 00:11:35.441 "assigned_rate_limits": { 00:11:35.441 "rw_ios_per_sec": 0, 00:11:35.441 "rw_mbytes_per_sec": 0, 00:11:35.441 "r_mbytes_per_sec": 0, 00:11:35.441 "w_mbytes_per_sec": 0 00:11:35.441 }, 00:11:35.441 "claimed": false, 00:11:35.441 "zoned": false, 00:11:35.441 "supported_io_types": { 00:11:35.441 "read": true, 00:11:35.441 "write": true, 00:11:35.441 "unmap": false, 00:11:35.441 "flush": false, 00:11:35.441 "reset": true, 00:11:35.441 "nvme_admin": false, 00:11:35.441 "nvme_io": false, 00:11:35.441 "nvme_io_md": false, 00:11:35.441 "write_zeroes": true, 00:11:35.441 "zcopy": false, 00:11:35.441 "get_zone_info": false, 00:11:35.441 "zone_management": false, 00:11:35.441 "zone_append": false, 00:11:35.441 "compare": false, 00:11:35.441 "compare_and_write": false, 00:11:35.441 "abort": false, 00:11:35.441 "seek_hole": false, 00:11:35.441 "seek_data": false, 00:11:35.441 "copy": false, 00:11:35.441 
"nvme_iov_md": false 00:11:35.441 }, 00:11:35.441 "memory_domains": [ 00:11:35.441 { 00:11:35.441 "dma_device_id": "system", 00:11:35.441 "dma_device_type": 1 00:11:35.441 }, 00:11:35.441 { 00:11:35.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.441 "dma_device_type": 2 00:11:35.441 }, 00:11:35.441 { 00:11:35.441 "dma_device_id": "system", 00:11:35.441 "dma_device_type": 1 00:11:35.441 }, 00:11:35.441 { 00:11:35.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.441 "dma_device_type": 2 00:11:35.441 }, 00:11:35.441 { 00:11:35.441 "dma_device_id": "system", 00:11:35.441 "dma_device_type": 1 00:11:35.441 }, 00:11:35.441 { 00:11:35.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.442 "dma_device_type": 2 00:11:35.442 }, 00:11:35.442 { 00:11:35.442 "dma_device_id": "system", 00:11:35.442 "dma_device_type": 1 00:11:35.442 }, 00:11:35.442 { 00:11:35.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.442 "dma_device_type": 2 00:11:35.442 } 00:11:35.442 ], 00:11:35.442 "driver_specific": { 00:11:35.442 "raid": { 00:11:35.442 "uuid": "7fac129d-c0e3-4279-9242-d00634a0cf20", 00:11:35.442 "strip_size_kb": 0, 00:11:35.442 "state": "online", 00:11:35.442 "raid_level": "raid1", 00:11:35.442 "superblock": true, 00:11:35.442 "num_base_bdevs": 4, 00:11:35.442 "num_base_bdevs_discovered": 4, 00:11:35.442 "num_base_bdevs_operational": 4, 00:11:35.442 "base_bdevs_list": [ 00:11:35.442 { 00:11:35.442 "name": "BaseBdev1", 00:11:35.442 "uuid": "89024de1-cbda-4afd-a8c5-6a371dd707e2", 00:11:35.442 "is_configured": true, 00:11:35.442 "data_offset": 2048, 00:11:35.442 "data_size": 63488 00:11:35.442 }, 00:11:35.442 { 00:11:35.442 "name": "BaseBdev2", 00:11:35.442 "uuid": "fc53d6b3-4357-450c-ae5c-83ef9463bd5a", 00:11:35.442 "is_configured": true, 00:11:35.442 "data_offset": 2048, 00:11:35.442 "data_size": 63488 00:11:35.442 }, 00:11:35.442 { 00:11:35.442 "name": "BaseBdev3", 00:11:35.442 "uuid": "deed5f45-bea8-4e23-902f-0497ab1f3712", 00:11:35.442 "is_configured": true, 
00:11:35.442 "data_offset": 2048, 00:11:35.442 "data_size": 63488 00:11:35.442 }, 00:11:35.442 { 00:11:35.442 "name": "BaseBdev4", 00:11:35.442 "uuid": "b3d94e9f-0151-441f-9f93-12a9bfbaf0b4", 00:11:35.442 "is_configured": true, 00:11:35.442 "data_offset": 2048, 00:11:35.442 "data_size": 63488 00:11:35.442 } 00:11:35.442 ] 00:11:35.442 } 00:11:35.442 } 00:11:35.442 }' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:35.442 BaseBdev2 00:11:35.442 BaseBdev3 00:11:35.442 BaseBdev4' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.442 19:40:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.442 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.701 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.702 [2024-12-12 19:40:18.356288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:35.702 19:40:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.702 "name": "Existed_Raid", 00:11:35.702 "uuid": "7fac129d-c0e3-4279-9242-d00634a0cf20", 00:11:35.702 "strip_size_kb": 0, 00:11:35.702 
"state": "online", 00:11:35.702 "raid_level": "raid1", 00:11:35.702 "superblock": true, 00:11:35.702 "num_base_bdevs": 4, 00:11:35.702 "num_base_bdevs_discovered": 3, 00:11:35.702 "num_base_bdevs_operational": 3, 00:11:35.702 "base_bdevs_list": [ 00:11:35.702 { 00:11:35.702 "name": null, 00:11:35.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.702 "is_configured": false, 00:11:35.702 "data_offset": 0, 00:11:35.702 "data_size": 63488 00:11:35.702 }, 00:11:35.702 { 00:11:35.702 "name": "BaseBdev2", 00:11:35.702 "uuid": "fc53d6b3-4357-450c-ae5c-83ef9463bd5a", 00:11:35.702 "is_configured": true, 00:11:35.702 "data_offset": 2048, 00:11:35.702 "data_size": 63488 00:11:35.702 }, 00:11:35.702 { 00:11:35.702 "name": "BaseBdev3", 00:11:35.702 "uuid": "deed5f45-bea8-4e23-902f-0497ab1f3712", 00:11:35.702 "is_configured": true, 00:11:35.702 "data_offset": 2048, 00:11:35.702 "data_size": 63488 00:11:35.702 }, 00:11:35.702 { 00:11:35.702 "name": "BaseBdev4", 00:11:35.702 "uuid": "b3d94e9f-0151-441f-9f93-12a9bfbaf0b4", 00:11:35.702 "is_configured": true, 00:11:35.702 "data_offset": 2048, 00:11:35.702 "data_size": 63488 00:11:35.702 } 00:11:35.702 ] 00:11:35.702 }' 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.702 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.270 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.270 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.270 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.270 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.271 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.271 19:40:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.271 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.271 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.271 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.271 19:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.271 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.271 19:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.271 [2024-12-12 19:40:18.937243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.271 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.271 [2024-12-12 19:40:19.092499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.530 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.530 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.530 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.530 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.530 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.531 [2024-12-12 19:40:19.243366] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:36.531 [2024-12-12 19:40:19.243531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.531 [2024-12-12 19:40:19.340562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.531 [2024-12-12 19:40:19.340722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.531 [2024-12-12 19:40:19.340769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:36.531 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.790 BaseBdev2 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.790 19:40:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:36.790 [ 00:11:36.790 { 00:11:36.790 "name": "BaseBdev2", 00:11:36.790 "aliases": [ 00:11:36.790 "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37" 00:11:36.790 ], 00:11:36.790 "product_name": "Malloc disk", 00:11:36.790 "block_size": 512, 00:11:36.790 "num_blocks": 65536, 00:11:36.790 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 00:11:36.790 "assigned_rate_limits": { 00:11:36.790 "rw_ios_per_sec": 0, 00:11:36.790 "rw_mbytes_per_sec": 0, 00:11:36.790 "r_mbytes_per_sec": 0, 00:11:36.790 "w_mbytes_per_sec": 0 00:11:36.790 }, 00:11:36.790 "claimed": false, 00:11:36.790 "zoned": false, 00:11:36.791 "supported_io_types": { 00:11:36.791 "read": true, 00:11:36.791 "write": true, 00:11:36.791 "unmap": true, 00:11:36.791 "flush": true, 00:11:36.791 "reset": true, 00:11:36.791 "nvme_admin": false, 00:11:36.791 "nvme_io": false, 00:11:36.791 "nvme_io_md": false, 00:11:36.791 "write_zeroes": true, 00:11:36.791 "zcopy": true, 00:11:36.791 "get_zone_info": false, 00:11:36.791 "zone_management": false, 00:11:36.791 "zone_append": false, 00:11:36.791 "compare": false, 00:11:36.791 "compare_and_write": false, 00:11:36.791 "abort": true, 00:11:36.791 "seek_hole": false, 00:11:36.791 "seek_data": false, 00:11:36.791 "copy": true, 00:11:36.791 "nvme_iov_md": false 00:11:36.791 }, 00:11:36.791 "memory_domains": [ 00:11:36.791 { 00:11:36.791 "dma_device_id": "system", 00:11:36.791 "dma_device_type": 1 00:11:36.791 }, 00:11:36.791 { 00:11:36.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.791 "dma_device_type": 2 00:11:36.791 } 00:11:36.791 ], 00:11:36.791 "driver_specific": {} 00:11:36.791 } 00:11:36.791 ] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.791 19:40:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.791 BaseBdev3 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.791 19:40:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.791 [ 00:11:36.791 { 00:11:36.791 "name": "BaseBdev3", 00:11:36.791 "aliases": [ 00:11:36.791 "316829ac-668a-4432-a8aa-e0b2ba646bb4" 00:11:36.791 ], 00:11:36.791 "product_name": "Malloc disk", 00:11:36.791 "block_size": 512, 00:11:36.791 "num_blocks": 65536, 00:11:36.791 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:36.791 "assigned_rate_limits": { 00:11:36.791 "rw_ios_per_sec": 0, 00:11:36.791 "rw_mbytes_per_sec": 0, 00:11:36.791 "r_mbytes_per_sec": 0, 00:11:36.791 "w_mbytes_per_sec": 0 00:11:36.791 }, 00:11:36.791 "claimed": false, 00:11:36.791 "zoned": false, 00:11:36.791 "supported_io_types": { 00:11:36.791 "read": true, 00:11:36.791 "write": true, 00:11:36.791 "unmap": true, 00:11:36.791 "flush": true, 00:11:36.791 "reset": true, 00:11:36.791 "nvme_admin": false, 00:11:36.791 "nvme_io": false, 00:11:36.791 "nvme_io_md": false, 00:11:36.791 "write_zeroes": true, 00:11:36.791 "zcopy": true, 00:11:36.791 "get_zone_info": false, 00:11:36.791 "zone_management": false, 00:11:36.791 "zone_append": false, 00:11:36.791 "compare": false, 00:11:36.791 "compare_and_write": false, 00:11:36.791 "abort": true, 00:11:36.791 "seek_hole": false, 00:11:36.791 "seek_data": false, 00:11:36.791 "copy": true, 00:11:36.791 "nvme_iov_md": false 00:11:36.791 }, 00:11:36.791 "memory_domains": [ 00:11:36.791 { 00:11:36.791 "dma_device_id": "system", 00:11:36.791 "dma_device_type": 1 00:11:36.791 }, 00:11:36.791 { 00:11:36.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.791 "dma_device_type": 2 00:11:36.791 } 00:11:36.791 ], 00:11:36.791 "driver_specific": {} 00:11:36.791 } 00:11:36.791 ] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.791 BaseBdev4 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.791 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.791 [ 00:11:36.791 { 00:11:36.791 "name": "BaseBdev4", 00:11:36.791 "aliases": [ 00:11:36.791 "b86dcf36-1475-47e4-87f9-30f6689d461a" 00:11:36.791 ], 00:11:36.791 "product_name": "Malloc disk", 00:11:36.791 "block_size": 512, 00:11:36.791 "num_blocks": 65536, 00:11:36.791 "uuid": "b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:36.791 "assigned_rate_limits": { 00:11:36.791 "rw_ios_per_sec": 0, 00:11:36.791 "rw_mbytes_per_sec": 0, 00:11:36.791 "r_mbytes_per_sec": 0, 00:11:36.791 "w_mbytes_per_sec": 0 00:11:36.791 }, 00:11:36.791 "claimed": false, 00:11:36.791 "zoned": false, 00:11:36.791 "supported_io_types": { 00:11:36.791 "read": true, 00:11:36.791 "write": true, 00:11:36.791 "unmap": true, 00:11:36.791 "flush": true, 00:11:36.791 "reset": true, 00:11:36.791 "nvme_admin": false, 00:11:36.791 "nvme_io": false, 00:11:36.791 "nvme_io_md": false, 00:11:36.791 "write_zeroes": true, 00:11:36.791 "zcopy": true, 00:11:36.791 "get_zone_info": false, 00:11:36.791 "zone_management": false, 00:11:36.791 "zone_append": false, 00:11:36.791 "compare": false, 00:11:36.791 "compare_and_write": false, 00:11:36.791 "abort": true, 00:11:36.791 "seek_hole": false, 00:11:36.791 "seek_data": false, 00:11:37.051 "copy": true, 00:11:37.051 "nvme_iov_md": false 00:11:37.051 }, 00:11:37.051 "memory_domains": [ 00:11:37.051 { 00:11:37.051 "dma_device_id": "system", 00:11:37.051 "dma_device_type": 1 00:11:37.051 }, 00:11:37.051 { 00:11:37.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.052 "dma_device_type": 2 00:11:37.052 } 00:11:37.052 ], 00:11:37.052 "driver_specific": {} 00:11:37.052 } 00:11:37.052 ] 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.052 [2024-12-12 19:40:19.644300] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.052 [2024-12-12 19:40:19.644408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.052 [2024-12-12 19:40:19.644449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.052 [2024-12-12 19:40:19.646325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.052 [2024-12-12 19:40:19.646417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.052 "name": "Existed_Raid", 00:11:37.052 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:37.052 "strip_size_kb": 0, 00:11:37.052 "state": "configuring", 00:11:37.052 "raid_level": "raid1", 00:11:37.052 "superblock": true, 00:11:37.052 "num_base_bdevs": 4, 00:11:37.052 "num_base_bdevs_discovered": 3, 00:11:37.052 "num_base_bdevs_operational": 4, 00:11:37.052 "base_bdevs_list": [ 00:11:37.052 { 00:11:37.052 "name": "BaseBdev1", 00:11:37.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.052 "is_configured": false, 00:11:37.052 "data_offset": 0, 00:11:37.052 "data_size": 0 00:11:37.052 }, 00:11:37.052 { 00:11:37.052 "name": "BaseBdev2", 00:11:37.052 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 
00:11:37.052 "is_configured": true, 00:11:37.052 "data_offset": 2048, 00:11:37.052 "data_size": 63488 00:11:37.052 }, 00:11:37.052 { 00:11:37.052 "name": "BaseBdev3", 00:11:37.052 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:37.052 "is_configured": true, 00:11:37.052 "data_offset": 2048, 00:11:37.052 "data_size": 63488 00:11:37.052 }, 00:11:37.052 { 00:11:37.052 "name": "BaseBdev4", 00:11:37.052 "uuid": "b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:37.052 "is_configured": true, 00:11:37.052 "data_offset": 2048, 00:11:37.052 "data_size": 63488 00:11:37.052 } 00:11:37.052 ] 00:11:37.052 }' 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.052 19:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.311 [2024-12-12 19:40:20.095522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.311 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.312 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.312 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.312 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.312 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.312 "name": "Existed_Raid", 00:11:37.312 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:37.312 "strip_size_kb": 0, 00:11:37.312 "state": "configuring", 00:11:37.312 "raid_level": "raid1", 00:11:37.312 "superblock": true, 00:11:37.312 "num_base_bdevs": 4, 00:11:37.312 "num_base_bdevs_discovered": 2, 00:11:37.312 "num_base_bdevs_operational": 4, 00:11:37.312 "base_bdevs_list": [ 00:11:37.312 { 00:11:37.312 "name": "BaseBdev1", 00:11:37.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.312 "is_configured": false, 00:11:37.312 "data_offset": 0, 00:11:37.312 "data_size": 0 00:11:37.312 }, 00:11:37.312 { 00:11:37.312 "name": null, 00:11:37.312 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 00:11:37.312 
"is_configured": false, 00:11:37.312 "data_offset": 0, 00:11:37.312 "data_size": 63488 00:11:37.312 }, 00:11:37.312 { 00:11:37.312 "name": "BaseBdev3", 00:11:37.312 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:37.312 "is_configured": true, 00:11:37.312 "data_offset": 2048, 00:11:37.312 "data_size": 63488 00:11:37.312 }, 00:11:37.312 { 00:11:37.312 "name": "BaseBdev4", 00:11:37.312 "uuid": "b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:37.312 "is_configured": true, 00:11:37.312 "data_offset": 2048, 00:11:37.312 "data_size": 63488 00:11:37.312 } 00:11:37.312 ] 00:11:37.312 }' 00:11:37.312 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.312 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.882 [2024-12-12 19:40:20.620277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.882 BaseBdev1 
00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.882 [ 00:11:37.882 { 00:11:37.882 "name": "BaseBdev1", 00:11:37.882 "aliases": [ 00:11:37.882 "fbf39b9f-8e6d-4c10-b649-df72a375f0e6" 00:11:37.882 ], 00:11:37.882 "product_name": "Malloc disk", 00:11:37.882 "block_size": 512, 00:11:37.882 "num_blocks": 65536, 00:11:37.882 "uuid": "fbf39b9f-8e6d-4c10-b649-df72a375f0e6", 00:11:37.882 "assigned_rate_limits": { 00:11:37.882 
"rw_ios_per_sec": 0, 00:11:37.882 "rw_mbytes_per_sec": 0, 00:11:37.882 "r_mbytes_per_sec": 0, 00:11:37.882 "w_mbytes_per_sec": 0 00:11:37.882 }, 00:11:37.882 "claimed": true, 00:11:37.882 "claim_type": "exclusive_write", 00:11:37.882 "zoned": false, 00:11:37.882 "supported_io_types": { 00:11:37.882 "read": true, 00:11:37.882 "write": true, 00:11:37.882 "unmap": true, 00:11:37.882 "flush": true, 00:11:37.882 "reset": true, 00:11:37.882 "nvme_admin": false, 00:11:37.882 "nvme_io": false, 00:11:37.882 "nvme_io_md": false, 00:11:37.882 "write_zeroes": true, 00:11:37.882 "zcopy": true, 00:11:37.882 "get_zone_info": false, 00:11:37.882 "zone_management": false, 00:11:37.882 "zone_append": false, 00:11:37.882 "compare": false, 00:11:37.882 "compare_and_write": false, 00:11:37.882 "abort": true, 00:11:37.882 "seek_hole": false, 00:11:37.882 "seek_data": false, 00:11:37.882 "copy": true, 00:11:37.882 "nvme_iov_md": false 00:11:37.882 }, 00:11:37.882 "memory_domains": [ 00:11:37.882 { 00:11:37.882 "dma_device_id": "system", 00:11:37.882 "dma_device_type": 1 00:11:37.882 }, 00:11:37.882 { 00:11:37.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.882 "dma_device_type": 2 00:11:37.882 } 00:11:37.882 ], 00:11:37.882 "driver_specific": {} 00:11:37.882 } 00:11:37.882 ] 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.882 "name": "Existed_Raid", 00:11:37.882 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:37.882 "strip_size_kb": 0, 00:11:37.882 "state": "configuring", 00:11:37.882 "raid_level": "raid1", 00:11:37.882 "superblock": true, 00:11:37.882 "num_base_bdevs": 4, 00:11:37.882 "num_base_bdevs_discovered": 3, 00:11:37.882 "num_base_bdevs_operational": 4, 00:11:37.882 "base_bdevs_list": [ 00:11:37.882 { 00:11:37.882 "name": "BaseBdev1", 00:11:37.882 "uuid": "fbf39b9f-8e6d-4c10-b649-df72a375f0e6", 00:11:37.882 "is_configured": true, 00:11:37.882 "data_offset": 2048, 00:11:37.882 "data_size": 63488 
00:11:37.882 }, 00:11:37.882 { 00:11:37.882 "name": null, 00:11:37.882 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 00:11:37.882 "is_configured": false, 00:11:37.882 "data_offset": 0, 00:11:37.882 "data_size": 63488 00:11:37.882 }, 00:11:37.882 { 00:11:37.882 "name": "BaseBdev3", 00:11:37.882 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:37.882 "is_configured": true, 00:11:37.882 "data_offset": 2048, 00:11:37.882 "data_size": 63488 00:11:37.882 }, 00:11:37.882 { 00:11:37.882 "name": "BaseBdev4", 00:11:37.882 "uuid": "b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:37.882 "is_configured": true, 00:11:37.882 "data_offset": 2048, 00:11:37.882 "data_size": 63488 00:11:37.882 } 00:11:37.882 ] 00:11:37.882 }' 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.882 19:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.450 
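The `verify_raid_bdev_state` step above selects the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` with jq and compares its fields against the expected values. A minimal standalone sketch of that check in Python, using the (abbreviated) JSON actually dumped in the log; the helper name mirrors the shell function but is not SPDK code:

```python
import json

# RAID bdev info as reported by `rpc.py bdev_raid_get_bdevs all` in the log
# above, abbreviated to the fields the check reads.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Mirror the comparisons the shell helper makes on the RPC output."""
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational
            and discovered == info["num_base_bdevs_discovered"])

# The dump above has 3 configured slots out of 4, state "configuring":
print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 4))  # True
```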
[2024-12-12 19:40:21.135527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.450 19:40:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.450 "name": "Existed_Raid", 00:11:38.450 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:38.450 "strip_size_kb": 0, 00:11:38.450 "state": "configuring", 00:11:38.450 "raid_level": "raid1", 00:11:38.450 "superblock": true, 00:11:38.450 "num_base_bdevs": 4, 00:11:38.450 "num_base_bdevs_discovered": 2, 00:11:38.450 "num_base_bdevs_operational": 4, 00:11:38.450 "base_bdevs_list": [ 00:11:38.450 { 00:11:38.450 "name": "BaseBdev1", 00:11:38.450 "uuid": "fbf39b9f-8e6d-4c10-b649-df72a375f0e6", 00:11:38.450 "is_configured": true, 00:11:38.450 "data_offset": 2048, 00:11:38.450 "data_size": 63488 00:11:38.450 }, 00:11:38.450 { 00:11:38.450 "name": null, 00:11:38.450 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 00:11:38.450 "is_configured": false, 00:11:38.450 "data_offset": 0, 00:11:38.450 "data_size": 63488 00:11:38.450 }, 00:11:38.450 { 00:11:38.450 "name": null, 00:11:38.450 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:38.450 "is_configured": false, 00:11:38.450 "data_offset": 0, 00:11:38.450 "data_size": 63488 00:11:38.450 }, 00:11:38.450 { 00:11:38.450 "name": "BaseBdev4", 00:11:38.450 "uuid": "b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:38.450 "is_configured": true, 00:11:38.450 "data_offset": 2048, 00:11:38.450 "data_size": 63488 00:11:38.450 } 00:11:38.450 ] 00:11:38.450 }' 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.450 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.709 
19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.709 [2024-12-12 19:40:21.542843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:38.709 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.968 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.968 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.968 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.968 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.968 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.968 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.968 "name": "Existed_Raid", 00:11:38.968 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:38.968 "strip_size_kb": 0, 00:11:38.968 "state": "configuring", 00:11:38.968 "raid_level": "raid1", 00:11:38.968 "superblock": true, 00:11:38.968 "num_base_bdevs": 4, 00:11:38.968 "num_base_bdevs_discovered": 3, 00:11:38.968 "num_base_bdevs_operational": 4, 00:11:38.968 "base_bdevs_list": [ 00:11:38.968 { 00:11:38.968 "name": "BaseBdev1", 00:11:38.968 "uuid": "fbf39b9f-8e6d-4c10-b649-df72a375f0e6", 00:11:38.968 "is_configured": true, 00:11:38.968 "data_offset": 2048, 00:11:38.968 "data_size": 63488 00:11:38.968 }, 00:11:38.968 { 00:11:38.968 "name": null, 00:11:38.968 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 00:11:38.968 "is_configured": false, 00:11:38.968 "data_offset": 0, 00:11:38.968 "data_size": 63488 00:11:38.968 }, 00:11:38.968 { 00:11:38.968 "name": "BaseBdev3", 00:11:38.968 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:38.968 "is_configured": true, 00:11:38.968 "data_offset": 2048, 00:11:38.968 "data_size": 63488 00:11:38.968 }, 00:11:38.968 { 00:11:38.968 "name": "BaseBdev4", 00:11:38.968 "uuid": 
"b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:38.968 "is_configured": true, 00:11:38.968 "data_offset": 2048, 00:11:38.968 "data_size": 63488 00:11:38.968 } 00:11:38.968 ] 00:11:38.968 }' 00:11:38.968 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.968 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.226 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.226 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.226 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.226 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.226 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.226 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:39.226 19:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.226 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.226 19:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.226 [2024-12-12 19:40:21.998155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- 
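The remove/re-add cycle above (`bdev_raid_remove_base_bdev BaseBdev3`, then `bdev_raid_add_base_bdev Existed_Raid BaseBdev3`) drives `num_base_bdevs_discovered` from 3 to 2 and back to 3, with the removed slot showing `"name": null` but keeping its uuid. A toy model of that bookkeeping, consistent with the transitions visible in the log but not SPDK's actual implementation (matching the re-added bdev by uuid is an illustrative simplification):

```python
def remove_base_bdev(info, name):
    # Toy model: the slot is kept (identity remembered via uuid), the name
    # shows up as null in the RPC dump and the configured flag is cleared.
    for slot in info["base_bdevs_list"]:
        if slot["name"] == name:
            slot["name"] = None
            slot["is_configured"] = False
    info["num_base_bdevs_discovered"] = sum(
        1 for s in info["base_bdevs_list"] if s["is_configured"])

def add_base_bdev(info, uuid, name):
    # Toy model of the re-add: an unconfigured slot is matched and restored.
    for slot in info["base_bdevs_list"]:
        if slot["uuid"] == uuid and not slot["is_configured"]:
            slot["name"] = name
            slot["is_configured"] = True
            break
    info["num_base_bdevs_discovered"] = sum(
        1 for s in info["base_bdevs_list"] if s["is_configured"])

# Starting point as in the log: BaseBdev2's slot is already unconfigured.
raid = {
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "uuid": "u1", "is_configured": True},
        {"name": None,        "uuid": "u2", "is_configured": False},
        {"name": "BaseBdev3", "uuid": "u3", "is_configured": True},
        {"name": "BaseBdev4", "uuid": "u4", "is_configured": True},
    ],
}

remove_base_bdev(raid, "BaseBdev3")
print(raid["num_base_bdevs_discovered"])  # 2
add_base_bdev(raid, "u3", "BaseBdev3")
print(raid["num_base_bdevs_discovered"])  # 3
```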
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.484 "name": "Existed_Raid", 00:11:39.484 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:39.484 "strip_size_kb": 0, 00:11:39.484 "state": "configuring", 00:11:39.484 "raid_level": "raid1", 00:11:39.484 "superblock": true, 00:11:39.484 "num_base_bdevs": 4, 00:11:39.484 "num_base_bdevs_discovered": 2, 00:11:39.484 "num_base_bdevs_operational": 4, 00:11:39.484 "base_bdevs_list": [ 00:11:39.484 { 00:11:39.484 "name": null, 00:11:39.484 
"uuid": "fbf39b9f-8e6d-4c10-b649-df72a375f0e6", 00:11:39.484 "is_configured": false, 00:11:39.484 "data_offset": 0, 00:11:39.484 "data_size": 63488 00:11:39.484 }, 00:11:39.484 { 00:11:39.484 "name": null, 00:11:39.484 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 00:11:39.484 "is_configured": false, 00:11:39.484 "data_offset": 0, 00:11:39.484 "data_size": 63488 00:11:39.484 }, 00:11:39.484 { 00:11:39.484 "name": "BaseBdev3", 00:11:39.484 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:39.484 "is_configured": true, 00:11:39.484 "data_offset": 2048, 00:11:39.484 "data_size": 63488 00:11:39.484 }, 00:11:39.484 { 00:11:39.484 "name": "BaseBdev4", 00:11:39.484 "uuid": "b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:39.484 "is_configured": true, 00:11:39.484 "data_offset": 2048, 00:11:39.484 "data_size": 63488 00:11:39.484 } 00:11:39.484 ] 00:11:39.484 }' 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.484 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.742 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.742 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.742 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.742 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.001 [2024-12-12 19:40:22.617408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.001 "name": "Existed_Raid", 00:11:40.001 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:40.001 "strip_size_kb": 0, 00:11:40.001 "state": "configuring", 00:11:40.001 "raid_level": "raid1", 00:11:40.001 "superblock": true, 00:11:40.001 "num_base_bdevs": 4, 00:11:40.001 "num_base_bdevs_discovered": 3, 00:11:40.001 "num_base_bdevs_operational": 4, 00:11:40.001 "base_bdevs_list": [ 00:11:40.001 { 00:11:40.001 "name": null, 00:11:40.001 "uuid": "fbf39b9f-8e6d-4c10-b649-df72a375f0e6", 00:11:40.001 "is_configured": false, 00:11:40.001 "data_offset": 0, 00:11:40.001 "data_size": 63488 00:11:40.001 }, 00:11:40.001 { 00:11:40.001 "name": "BaseBdev2", 00:11:40.001 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 00:11:40.001 "is_configured": true, 00:11:40.001 "data_offset": 2048, 00:11:40.001 "data_size": 63488 00:11:40.001 }, 00:11:40.001 { 00:11:40.001 "name": "BaseBdev3", 00:11:40.001 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:40.001 "is_configured": true, 00:11:40.001 "data_offset": 2048, 00:11:40.001 "data_size": 63488 00:11:40.001 }, 00:11:40.001 { 00:11:40.001 "name": "BaseBdev4", 00:11:40.001 "uuid": "b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:40.001 "is_configured": true, 00:11:40.001 "data_offset": 2048, 00:11:40.001 "data_size": 63488 00:11:40.001 } 00:11:40.001 ] 00:11:40.001 }' 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.001 19:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.260 19:40:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fbf39b9f-8e6d-4c10-b649-df72a375f0e6 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.260 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.519 [2024-12-12 19:40:23.150690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:40.519 [2024-12-12 19:40:23.151159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.519 [2024-12-12 19:40:23.151226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.519 [2024-12-12 19:40:23.151630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:40.519 NewBaseBdev 00:11:40.519 [2024-12-12 19:40:23.151968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.519 [2024-12-12 19:40:23.152024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:40.519 [2024-12-12 19:40:23.152299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.519 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.519 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.520 19:40:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.520 [ 00:11:40.520 { 00:11:40.520 "name": "NewBaseBdev", 00:11:40.520 "aliases": [ 00:11:40.520 "fbf39b9f-8e6d-4c10-b649-df72a375f0e6" 00:11:40.520 ], 00:11:40.520 "product_name": "Malloc disk", 00:11:40.520 "block_size": 512, 00:11:40.520 "num_blocks": 65536, 00:11:40.520 "uuid": "fbf39b9f-8e6d-4c10-b649-df72a375f0e6", 00:11:40.520 "assigned_rate_limits": { 00:11:40.520 "rw_ios_per_sec": 0, 00:11:40.520 "rw_mbytes_per_sec": 0, 00:11:40.520 "r_mbytes_per_sec": 0, 00:11:40.520 "w_mbytes_per_sec": 0 00:11:40.520 }, 00:11:40.520 "claimed": true, 00:11:40.520 "claim_type": "exclusive_write", 00:11:40.520 "zoned": false, 00:11:40.520 "supported_io_types": { 00:11:40.520 "read": true, 00:11:40.520 "write": true, 00:11:40.520 "unmap": true, 00:11:40.520 "flush": true, 00:11:40.520 "reset": true, 00:11:40.520 "nvme_admin": false, 00:11:40.520 "nvme_io": false, 00:11:40.520 "nvme_io_md": false, 00:11:40.520 "write_zeroes": true, 00:11:40.520 "zcopy": true, 00:11:40.520 "get_zone_info": false, 00:11:40.520 "zone_management": false, 00:11:40.520 "zone_append": false, 00:11:40.520 "compare": false, 00:11:40.520 "compare_and_write": false, 00:11:40.520 "abort": true, 00:11:40.520 "seek_hole": false, 00:11:40.520 "seek_data": false, 00:11:40.520 "copy": true, 00:11:40.520 "nvme_iov_md": false 00:11:40.520 }, 00:11:40.520 "memory_domains": [ 00:11:40.520 { 00:11:40.520 "dma_device_id": "system", 00:11:40.520 "dma_device_type": 1 00:11:40.520 }, 00:11:40.520 { 00:11:40.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.520 "dma_device_type": 2 00:11:40.520 } 00:11:40.520 ], 00:11:40.520 "driver_specific": {} 00:11:40.520 } 00:11:40.520 ] 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.520 19:40:23 
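The `bdev_get_bdevs -b NewBaseBdev` dump above lists a `supported_io_types` map for the Malloc disk. A short sketch of how a caller could gate on that map before using a bdev as a RAID base device; the required set below is an illustrative assumption, not taken from SPDK source:

```python
# supported_io_types reported for the Malloc bdev above (excerpted).
supported = {
    "read": True, "write": True, "unmap": True, "flush": True, "reset": True,
    "nvme_admin": False, "nvme_io": False, "write_zeroes": True,
    "zcopy": True, "compare": False, "compare_and_write": False,
    "abort": True, "copy": True,
}

# Hypothetical required set: which I/O types a base device must support is an
# assumption here for illustration only.
REQUIRED = ("read", "write", "reset", "flush")

missing = [op for op in REQUIRED if not supported.get(op, False)]
print("ok" if not missing else "missing: %s" % missing)  # prints "ok"
```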
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.520 "name": "Existed_Raid", 00:11:40.520 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:40.520 "strip_size_kb": 0, 00:11:40.520 
"state": "online", 00:11:40.520 "raid_level": "raid1", 00:11:40.520 "superblock": true, 00:11:40.520 "num_base_bdevs": 4, 00:11:40.520 "num_base_bdevs_discovered": 4, 00:11:40.520 "num_base_bdevs_operational": 4, 00:11:40.520 "base_bdevs_list": [ 00:11:40.520 { 00:11:40.520 "name": "NewBaseBdev", 00:11:40.520 "uuid": "fbf39b9f-8e6d-4c10-b649-df72a375f0e6", 00:11:40.520 "is_configured": true, 00:11:40.520 "data_offset": 2048, 00:11:40.520 "data_size": 63488 00:11:40.520 }, 00:11:40.520 { 00:11:40.520 "name": "BaseBdev2", 00:11:40.520 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 00:11:40.520 "is_configured": true, 00:11:40.520 "data_offset": 2048, 00:11:40.520 "data_size": 63488 00:11:40.520 }, 00:11:40.520 { 00:11:40.520 "name": "BaseBdev3", 00:11:40.520 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:40.520 "is_configured": true, 00:11:40.520 "data_offset": 2048, 00:11:40.520 "data_size": 63488 00:11:40.520 }, 00:11:40.520 { 00:11:40.520 "name": "BaseBdev4", 00:11:40.520 "uuid": "b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:40.520 "is_configured": true, 00:11:40.520 "data_offset": 2048, 00:11:40.520 "data_size": 63488 00:11:40.520 } 00:11:40.520 ] 00:11:40.520 }' 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.520 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.779 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:40.779 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:40.779 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.779 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.779 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.779 
19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.779 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:40.779 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.779 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.039 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.039 [2024-12-12 19:40:23.630307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.039 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.039 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.039 "name": "Existed_Raid", 00:11:41.039 "aliases": [ 00:11:41.039 "6b56f25e-744c-43f1-9b9f-098d5ea7b266" 00:11:41.039 ], 00:11:41.039 "product_name": "Raid Volume", 00:11:41.039 "block_size": 512, 00:11:41.039 "num_blocks": 63488, 00:11:41.039 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:41.039 "assigned_rate_limits": { 00:11:41.039 "rw_ios_per_sec": 0, 00:11:41.039 "rw_mbytes_per_sec": 0, 00:11:41.039 "r_mbytes_per_sec": 0, 00:11:41.039 "w_mbytes_per_sec": 0 00:11:41.039 }, 00:11:41.039 "claimed": false, 00:11:41.039 "zoned": false, 00:11:41.039 "supported_io_types": { 00:11:41.039 "read": true, 00:11:41.039 "write": true, 00:11:41.039 "unmap": false, 00:11:41.039 "flush": false, 00:11:41.039 "reset": true, 00:11:41.039 "nvme_admin": false, 00:11:41.039 "nvme_io": false, 00:11:41.039 "nvme_io_md": false, 00:11:41.039 "write_zeroes": true, 00:11:41.039 "zcopy": false, 00:11:41.039 "get_zone_info": false, 00:11:41.039 "zone_management": false, 00:11:41.039 "zone_append": false, 00:11:41.039 "compare": false, 00:11:41.039 "compare_and_write": false, 00:11:41.039 
"abort": false, 00:11:41.039 "seek_hole": false, 00:11:41.039 "seek_data": false, 00:11:41.039 "copy": false, 00:11:41.039 "nvme_iov_md": false 00:11:41.039 }, 00:11:41.039 "memory_domains": [ 00:11:41.039 { 00:11:41.039 "dma_device_id": "system", 00:11:41.039 "dma_device_type": 1 00:11:41.039 }, 00:11:41.039 { 00:11:41.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.039 "dma_device_type": 2 00:11:41.039 }, 00:11:41.039 { 00:11:41.039 "dma_device_id": "system", 00:11:41.039 "dma_device_type": 1 00:11:41.039 }, 00:11:41.039 { 00:11:41.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.039 "dma_device_type": 2 00:11:41.039 }, 00:11:41.039 { 00:11:41.039 "dma_device_id": "system", 00:11:41.039 "dma_device_type": 1 00:11:41.039 }, 00:11:41.039 { 00:11:41.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.039 "dma_device_type": 2 00:11:41.039 }, 00:11:41.039 { 00:11:41.039 "dma_device_id": "system", 00:11:41.039 "dma_device_type": 1 00:11:41.039 }, 00:11:41.039 { 00:11:41.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.039 "dma_device_type": 2 00:11:41.039 } 00:11:41.039 ], 00:11:41.039 "driver_specific": { 00:11:41.039 "raid": { 00:11:41.039 "uuid": "6b56f25e-744c-43f1-9b9f-098d5ea7b266", 00:11:41.039 "strip_size_kb": 0, 00:11:41.039 "state": "online", 00:11:41.039 "raid_level": "raid1", 00:11:41.039 "superblock": true, 00:11:41.039 "num_base_bdevs": 4, 00:11:41.039 "num_base_bdevs_discovered": 4, 00:11:41.039 "num_base_bdevs_operational": 4, 00:11:41.039 "base_bdevs_list": [ 00:11:41.039 { 00:11:41.039 "name": "NewBaseBdev", 00:11:41.039 "uuid": "fbf39b9f-8e6d-4c10-b649-df72a375f0e6", 00:11:41.039 "is_configured": true, 00:11:41.039 "data_offset": 2048, 00:11:41.039 "data_size": 63488 00:11:41.039 }, 00:11:41.039 { 00:11:41.039 "name": "BaseBdev2", 00:11:41.039 "uuid": "f90ba654-8f9c-4eaa-9a22-df3ee7c61f37", 00:11:41.039 "is_configured": true, 00:11:41.039 "data_offset": 2048, 00:11:41.039 "data_size": 63488 00:11:41.039 }, 00:11:41.039 { 
00:11:41.039 "name": "BaseBdev3", 00:11:41.039 "uuid": "316829ac-668a-4432-a8aa-e0b2ba646bb4", 00:11:41.039 "is_configured": true, 00:11:41.039 "data_offset": 2048, 00:11:41.039 "data_size": 63488 00:11:41.039 }, 00:11:41.039 { 00:11:41.039 "name": "BaseBdev4", 00:11:41.039 "uuid": "b86dcf36-1475-47e4-87f9-30f6689d461a", 00:11:41.040 "is_configured": true, 00:11:41.040 "data_offset": 2048, 00:11:41.040 "data_size": 63488 00:11:41.040 } 00:11:41.040 ] 00:11:41.040 } 00:11:41.040 } 00:11:41.040 }' 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:41.040 BaseBdev2 00:11:41.040 BaseBdev3 00:11:41.040 BaseBdev4' 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.040 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.300 [2024-12-12 19:40:23.961484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.300 [2024-12-12 19:40:23.961627] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.300 [2024-12-12 19:40:23.961791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.300 [2024-12-12 19:40:23.962205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.300 [2024-12-12 19:40:23.962283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75559 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75559 ']' 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75559 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.300 19:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75559 00:11:41.300 19:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.300 19:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.300 19:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75559' 00:11:41.300 killing process with pid 75559 00:11:41.300 19:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75559 00:11:41.300 [2024-12-12 19:40:24.012562] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.300 19:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75559 00:11:41.869 [2024-12-12 19:40:24.476122] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.272 19:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:43.272 00:11:43.272 real 0m11.809s 00:11:43.272 user 0m18.329s 00:11:43.272 sys 0m2.267s 00:11:43.272 19:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:43.272 19:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.272 ************************************ 00:11:43.272 END TEST raid_state_function_test_sb 00:11:43.272 ************************************ 00:11:43.272 19:40:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:43.272 19:40:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.272 19:40:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.272 19:40:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.272 ************************************ 00:11:43.272 START TEST raid_superblock_test 00:11:43.272 ************************************ 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:43.272 19:40:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76224 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76224 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76224 ']' 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.272 19:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.272 [2024-12-12 19:40:25.944277] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:43.272 [2024-12-12 19:40:25.944478] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76224 ] 00:11:43.531 [2024-12-12 19:40:26.119387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.531 [2024-12-12 19:40:26.263404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.789 [2024-12-12 19:40:26.529206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.789 [2024-12-12 19:40:26.529407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:44.048 
19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.048 malloc1 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.048 [2024-12-12 19:40:26.842789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:44.048 [2024-12-12 19:40:26.842960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.048 [2024-12-12 19:40:26.843011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:44.048 [2024-12-12 19:40:26.843064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.048 [2024-12-12 19:40:26.845648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.048 [2024-12-12 19:40:26.845762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:44.048 pt1 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.048 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.307 malloc2 00:11:44.307 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.307 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:44.307 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.307 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.308 [2024-12-12 19:40:26.907032] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:44.308 [2024-12-12 19:40:26.907195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.308 [2024-12-12 19:40:26.907247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:44.308 [2024-12-12 19:40:26.907290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.308 [2024-12-12 19:40:26.909942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.308 [2024-12-12 19:40:26.910032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:44.308 
pt2 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.308 malloc3 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.308 [2024-12-12 19:40:26.980557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:44.308 [2024-12-12 19:40:26.980712] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.308 [2024-12-12 19:40:26.980760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:44.308 [2024-12-12 19:40:26.980804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.308 [2024-12-12 19:40:26.983410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.308 [2024-12-12 19:40:26.983506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:44.308 pt3 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.308 19:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.308 malloc4 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.308 [2024-12-12 19:40:27.041308] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:44.308 [2024-12-12 19:40:27.041470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.308 [2024-12-12 19:40:27.041520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:44.308 [2024-12-12 19:40:27.041576] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.308 [2024-12-12 19:40:27.044192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.308 [2024-12-12 19:40:27.044300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:44.308 pt4 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.308 [2024-12-12 19:40:27.053313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:44.308 [2024-12-12 19:40:27.055564] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:44.308 [2024-12-12 19:40:27.055688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:44.308 [2024-12-12 19:40:27.055795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:44.308 [2024-12-12 19:40:27.056086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:44.308 [2024-12-12 19:40:27.056149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:44.308 [2024-12-12 19:40:27.056486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:44.308 [2024-12-12 19:40:27.056771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:44.308 [2024-12-12 19:40:27.056835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:44.308 [2024-12-12 19:40:27.057126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.308 
19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.308 "name": "raid_bdev1", 00:11:44.308 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:44.308 "strip_size_kb": 0, 00:11:44.308 "state": "online", 00:11:44.308 "raid_level": "raid1", 00:11:44.308 "superblock": true, 00:11:44.308 "num_base_bdevs": 4, 00:11:44.308 "num_base_bdevs_discovered": 4, 00:11:44.308 "num_base_bdevs_operational": 4, 00:11:44.308 "base_bdevs_list": [ 00:11:44.308 { 00:11:44.308 "name": "pt1", 00:11:44.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.308 "is_configured": true, 00:11:44.308 "data_offset": 2048, 00:11:44.308 "data_size": 63488 00:11:44.308 }, 00:11:44.308 { 00:11:44.308 "name": "pt2", 00:11:44.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.308 "is_configured": true, 00:11:44.308 "data_offset": 2048, 00:11:44.308 "data_size": 63488 00:11:44.308 }, 00:11:44.308 { 00:11:44.308 "name": "pt3", 00:11:44.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.308 "is_configured": true, 00:11:44.308 "data_offset": 2048, 00:11:44.308 "data_size": 63488 
00:11:44.308 }, 00:11:44.308 { 00:11:44.308 "name": "pt4", 00:11:44.308 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.308 "is_configured": true, 00:11:44.308 "data_offset": 2048, 00:11:44.308 "data_size": 63488 00:11:44.308 } 00:11:44.308 ] 00:11:44.308 }' 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.308 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.875 [2024-12-12 19:40:27.469125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.875 "name": "raid_bdev1", 00:11:44.875 "aliases": [ 00:11:44.875 "afa2bdb3-dd45-464a-875e-ded6880c2550" 00:11:44.875 ], 
00:11:44.875 "product_name": "Raid Volume", 00:11:44.875 "block_size": 512, 00:11:44.875 "num_blocks": 63488, 00:11:44.875 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:44.875 "assigned_rate_limits": { 00:11:44.875 "rw_ios_per_sec": 0, 00:11:44.875 "rw_mbytes_per_sec": 0, 00:11:44.875 "r_mbytes_per_sec": 0, 00:11:44.875 "w_mbytes_per_sec": 0 00:11:44.875 }, 00:11:44.875 "claimed": false, 00:11:44.875 "zoned": false, 00:11:44.875 "supported_io_types": { 00:11:44.875 "read": true, 00:11:44.875 "write": true, 00:11:44.875 "unmap": false, 00:11:44.875 "flush": false, 00:11:44.875 "reset": true, 00:11:44.875 "nvme_admin": false, 00:11:44.875 "nvme_io": false, 00:11:44.875 "nvme_io_md": false, 00:11:44.875 "write_zeroes": true, 00:11:44.875 "zcopy": false, 00:11:44.875 "get_zone_info": false, 00:11:44.875 "zone_management": false, 00:11:44.875 "zone_append": false, 00:11:44.875 "compare": false, 00:11:44.875 "compare_and_write": false, 00:11:44.875 "abort": false, 00:11:44.875 "seek_hole": false, 00:11:44.875 "seek_data": false, 00:11:44.875 "copy": false, 00:11:44.875 "nvme_iov_md": false 00:11:44.875 }, 00:11:44.875 "memory_domains": [ 00:11:44.875 { 00:11:44.875 "dma_device_id": "system", 00:11:44.875 "dma_device_type": 1 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.875 "dma_device_type": 2 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "dma_device_id": "system", 00:11:44.875 "dma_device_type": 1 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.875 "dma_device_type": 2 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "dma_device_id": "system", 00:11:44.875 "dma_device_type": 1 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.875 "dma_device_type": 2 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "dma_device_id": "system", 00:11:44.875 "dma_device_type": 1 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:44.875 "dma_device_type": 2 00:11:44.875 } 00:11:44.875 ], 00:11:44.875 "driver_specific": { 00:11:44.875 "raid": { 00:11:44.875 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:44.875 "strip_size_kb": 0, 00:11:44.875 "state": "online", 00:11:44.875 "raid_level": "raid1", 00:11:44.875 "superblock": true, 00:11:44.875 "num_base_bdevs": 4, 00:11:44.875 "num_base_bdevs_discovered": 4, 00:11:44.875 "num_base_bdevs_operational": 4, 00:11:44.875 "base_bdevs_list": [ 00:11:44.875 { 00:11:44.875 "name": "pt1", 00:11:44.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.875 "is_configured": true, 00:11:44.875 "data_offset": 2048, 00:11:44.875 "data_size": 63488 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "name": "pt2", 00:11:44.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.875 "is_configured": true, 00:11:44.875 "data_offset": 2048, 00:11:44.875 "data_size": 63488 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "name": "pt3", 00:11:44.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.875 "is_configured": true, 00:11:44.875 "data_offset": 2048, 00:11:44.875 "data_size": 63488 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "name": "pt4", 00:11:44.875 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.875 "is_configured": true, 00:11:44.875 "data_offset": 2048, 00:11:44.875 "data_size": 63488 00:11:44.875 } 00:11:44.875 ] 00:11:44.875 } 00:11:44.875 } 00:11:44.875 }' 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:44.875 pt2 00:11:44.875 pt3 00:11:44.875 pt4' 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.875 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.876 19:40:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.876 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 [2024-12-12 19:40:27.780641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=afa2bdb3-dd45-464a-875e-ded6880c2550 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z afa2bdb3-dd45-464a-875e-ded6880c2550 ']' 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 [2024-12-12 19:40:27.824186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.135 [2024-12-12 19:40:27.824232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.135 [2024-12-12 19:40:27.824360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.135 [2024-12-12 19:40:27.824467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.135 [2024-12-12 19:40:27.824491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.394 19:40:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 19:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 [2024-12-12 19:40:27.995908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:45.394 [2024-12-12 19:40:27.998488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:45.394 [2024-12-12 19:40:27.998582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:45.394 [2024-12-12 19:40:27.998632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:45.394 [2024-12-12 19:40:27.998706] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:45.394 [2024-12-12 19:40:27.998778] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:45.394 [2024-12-12 19:40:27.998809] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:45.394 [2024-12-12 19:40:27.998836] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:45.394 [2024-12-12 19:40:27.998855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.394 [2024-12-12 19:40:27.998872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:45.394 request: 00:11:45.394 { 00:11:45.394 "name": "raid_bdev1", 00:11:45.394 "raid_level": "raid1", 00:11:45.394 "base_bdevs": [ 00:11:45.394 "malloc1", 00:11:45.394 "malloc2", 00:11:45.394 "malloc3", 00:11:45.394 "malloc4" 00:11:45.394 ], 00:11:45.394 "superblock": false, 00:11:45.394 "method": "bdev_raid_create", 00:11:45.394 "req_id": 1 00:11:45.394 } 00:11:45.394 Got JSON-RPC error response 00:11:45.394 response: 00:11:45.395 { 00:11:45.395 "code": -17, 00:11:45.395 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:45.395 } 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.395 
19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.395 [2024-12-12 19:40:28.059768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.395 [2024-12-12 19:40:28.059853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.395 [2024-12-12 19:40:28.059877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:45.395 [2024-12-12 19:40:28.059892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.395 [2024-12-12 19:40:28.062585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.395 [2024-12-12 19:40:28.062638] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.395 [2024-12-12 19:40:28.062750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:45.395 [2024-12-12 19:40:28.062825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.395 pt1 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.395 19:40:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.395 "name": "raid_bdev1", 00:11:45.395 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:45.395 "strip_size_kb": 0, 00:11:45.395 "state": "configuring", 00:11:45.395 "raid_level": "raid1", 00:11:45.395 "superblock": true, 00:11:45.395 "num_base_bdevs": 4, 00:11:45.395 "num_base_bdevs_discovered": 1, 00:11:45.395 "num_base_bdevs_operational": 4, 00:11:45.395 "base_bdevs_list": [ 00:11:45.395 { 00:11:45.395 "name": "pt1", 00:11:45.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.395 "is_configured": true, 00:11:45.395 "data_offset": 2048, 00:11:45.395 "data_size": 63488 00:11:45.395 }, 00:11:45.395 { 00:11:45.395 "name": null, 00:11:45.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.395 "is_configured": false, 00:11:45.395 "data_offset": 2048, 00:11:45.395 "data_size": 63488 00:11:45.395 }, 00:11:45.395 { 00:11:45.395 "name": null, 00:11:45.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.395 
"is_configured": false, 00:11:45.395 "data_offset": 2048, 00:11:45.395 "data_size": 63488 00:11:45.395 }, 00:11:45.395 { 00:11:45.395 "name": null, 00:11:45.395 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.395 "is_configured": false, 00:11:45.395 "data_offset": 2048, 00:11:45.395 "data_size": 63488 00:11:45.395 } 00:11:45.395 ] 00:11:45.395 }' 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.395 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.654 [2024-12-12 19:40:28.435208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.654 [2024-12-12 19:40:28.435342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.654 [2024-12-12 19:40:28.435376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:45.654 [2024-12-12 19:40:28.435394] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.654 [2024-12-12 19:40:28.436065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.654 [2024-12-12 19:40:28.436116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.654 [2024-12-12 19:40:28.436264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:45.654 [2024-12-12 19:40:28.436313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:45.654 pt2 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.654 [2024-12-12 19:40:28.443159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.654 19:40:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.654 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.913 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.913 "name": "raid_bdev1", 00:11:45.913 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:45.913 "strip_size_kb": 0, 00:11:45.913 "state": "configuring", 00:11:45.913 "raid_level": "raid1", 00:11:45.913 "superblock": true, 00:11:45.913 "num_base_bdevs": 4, 00:11:45.913 "num_base_bdevs_discovered": 1, 00:11:45.913 "num_base_bdevs_operational": 4, 00:11:45.913 "base_bdevs_list": [ 00:11:45.913 { 00:11:45.913 "name": "pt1", 00:11:45.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.913 "is_configured": true, 00:11:45.913 "data_offset": 2048, 00:11:45.913 "data_size": 63488 00:11:45.913 }, 00:11:45.913 { 00:11:45.913 "name": null, 00:11:45.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.913 "is_configured": false, 00:11:45.913 "data_offset": 0, 00:11:45.913 "data_size": 63488 00:11:45.913 }, 00:11:45.913 { 00:11:45.913 "name": null, 00:11:45.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.913 "is_configured": false, 00:11:45.913 "data_offset": 2048, 00:11:45.913 "data_size": 63488 00:11:45.913 }, 00:11:45.913 { 00:11:45.913 "name": null, 00:11:45.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.913 "is_configured": false, 00:11:45.913 "data_offset": 2048, 00:11:45.913 "data_size": 63488 00:11:45.913 } 00:11:45.913 ] 00:11:45.913 }' 00:11:45.913 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.913 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.172 [2024-12-12 19:40:28.870454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.172 [2024-12-12 19:40:28.870576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.172 [2024-12-12 19:40:28.870609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:46.172 [2024-12-12 19:40:28.870623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.172 [2024-12-12 19:40:28.871254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.172 [2024-12-12 19:40:28.871300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.172 [2024-12-12 19:40:28.871415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.172 [2024-12-12 19:40:28.871467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.172 pt2 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.172 19:40:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.172 [2024-12-12 19:40:28.882359] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.172 [2024-12-12 19:40:28.882432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.172 [2024-12-12 19:40:28.882457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:46.172 [2024-12-12 19:40:28.882468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.172 [2024-12-12 19:40:28.882989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.172 [2024-12-12 19:40:28.883026] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.172 [2024-12-12 19:40:28.883117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:46.172 [2024-12-12 19:40:28.883153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:46.172 pt3 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.172 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.173 [2024-12-12 19:40:28.894322] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:46.173 [2024-12-12 
19:40:28.894381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.173 [2024-12-12 19:40:28.894404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:46.173 [2024-12-12 19:40:28.894415] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.173 [2024-12-12 19:40:28.894901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.173 [2024-12-12 19:40:28.894932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:46.173 [2024-12-12 19:40:28.895017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:46.173 [2024-12-12 19:40:28.895052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:46.173 [2024-12-12 19:40:28.895255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:46.173 [2024-12-12 19:40:28.895276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.173 [2024-12-12 19:40:28.895585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:46.173 [2024-12-12 19:40:28.895788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:46.173 [2024-12-12 19:40:28.895814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:46.173 [2024-12-12 19:40:28.895993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.173 pt4 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.173 "name": "raid_bdev1", 00:11:46.173 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:46.173 "strip_size_kb": 0, 00:11:46.173 "state": "online", 00:11:46.173 "raid_level": "raid1", 00:11:46.173 "superblock": true, 00:11:46.173 "num_base_bdevs": 4, 00:11:46.173 
"num_base_bdevs_discovered": 4, 00:11:46.173 "num_base_bdevs_operational": 4, 00:11:46.173 "base_bdevs_list": [ 00:11:46.173 { 00:11:46.173 "name": "pt1", 00:11:46.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.173 "is_configured": true, 00:11:46.173 "data_offset": 2048, 00:11:46.173 "data_size": 63488 00:11:46.173 }, 00:11:46.173 { 00:11:46.173 "name": "pt2", 00:11:46.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.173 "is_configured": true, 00:11:46.173 "data_offset": 2048, 00:11:46.173 "data_size": 63488 00:11:46.173 }, 00:11:46.173 { 00:11:46.173 "name": "pt3", 00:11:46.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.173 "is_configured": true, 00:11:46.173 "data_offset": 2048, 00:11:46.173 "data_size": 63488 00:11:46.173 }, 00:11:46.173 { 00:11:46.173 "name": "pt4", 00:11:46.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.173 "is_configured": true, 00:11:46.173 "data_offset": 2048, 00:11:46.173 "data_size": 63488 00:11:46.173 } 00:11:46.173 ] 00:11:46.173 }' 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.173 19:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.433 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.693 [2024-12-12 19:40:29.278317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.693 "name": "raid_bdev1", 00:11:46.693 "aliases": [ 00:11:46.693 "afa2bdb3-dd45-464a-875e-ded6880c2550" 00:11:46.693 ], 00:11:46.693 "product_name": "Raid Volume", 00:11:46.693 "block_size": 512, 00:11:46.693 "num_blocks": 63488, 00:11:46.693 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:46.693 "assigned_rate_limits": { 00:11:46.693 "rw_ios_per_sec": 0, 00:11:46.693 "rw_mbytes_per_sec": 0, 00:11:46.693 "r_mbytes_per_sec": 0, 00:11:46.693 "w_mbytes_per_sec": 0 00:11:46.693 }, 00:11:46.693 "claimed": false, 00:11:46.693 "zoned": false, 00:11:46.693 "supported_io_types": { 00:11:46.693 "read": true, 00:11:46.693 "write": true, 00:11:46.693 "unmap": false, 00:11:46.693 "flush": false, 00:11:46.693 "reset": true, 00:11:46.693 "nvme_admin": false, 00:11:46.693 "nvme_io": false, 00:11:46.693 "nvme_io_md": false, 00:11:46.693 "write_zeroes": true, 00:11:46.693 "zcopy": false, 00:11:46.693 "get_zone_info": false, 00:11:46.693 "zone_management": false, 00:11:46.693 "zone_append": false, 00:11:46.693 "compare": false, 00:11:46.693 "compare_and_write": false, 00:11:46.693 "abort": false, 00:11:46.693 "seek_hole": false, 00:11:46.693 "seek_data": false, 00:11:46.693 "copy": false, 00:11:46.693 "nvme_iov_md": false 00:11:46.693 }, 00:11:46.693 "memory_domains": [ 00:11:46.693 { 00:11:46.693 "dma_device_id": "system", 00:11:46.693 
"dma_device_type": 1 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.693 "dma_device_type": 2 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "dma_device_id": "system", 00:11:46.693 "dma_device_type": 1 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.693 "dma_device_type": 2 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "dma_device_id": "system", 00:11:46.693 "dma_device_type": 1 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.693 "dma_device_type": 2 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "dma_device_id": "system", 00:11:46.693 "dma_device_type": 1 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.693 "dma_device_type": 2 00:11:46.693 } 00:11:46.693 ], 00:11:46.693 "driver_specific": { 00:11:46.693 "raid": { 00:11:46.693 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:46.693 "strip_size_kb": 0, 00:11:46.693 "state": "online", 00:11:46.693 "raid_level": "raid1", 00:11:46.693 "superblock": true, 00:11:46.693 "num_base_bdevs": 4, 00:11:46.693 "num_base_bdevs_discovered": 4, 00:11:46.693 "num_base_bdevs_operational": 4, 00:11:46.693 "base_bdevs_list": [ 00:11:46.693 { 00:11:46.693 "name": "pt1", 00:11:46.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.693 "is_configured": true, 00:11:46.693 "data_offset": 2048, 00:11:46.693 "data_size": 63488 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "name": "pt2", 00:11:46.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.693 "is_configured": true, 00:11:46.693 "data_offset": 2048, 00:11:46.693 "data_size": 63488 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "name": "pt3", 00:11:46.693 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.693 "is_configured": true, 00:11:46.693 "data_offset": 2048, 00:11:46.693 "data_size": 63488 00:11:46.693 }, 00:11:46.693 { 00:11:46.693 "name": "pt4", 00:11:46.693 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:46.693 "is_configured": true, 00:11:46.693 "data_offset": 2048, 00:11:46.693 "data_size": 63488 00:11:46.693 } 00:11:46.693 ] 00:11:46.693 } 00:11:46.693 } 00:11:46.693 }' 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:46.693 pt2 00:11:46.693 pt3 00:11:46.693 pt4' 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.693 19:40:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.693 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.694 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.954 [2024-12-12 19:40:29.537898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' afa2bdb3-dd45-464a-875e-ded6880c2550 '!=' afa2bdb3-dd45-464a-875e-ded6880c2550 ']' 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.954 [2024-12-12 19:40:29.577503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:46.954 19:40:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.954 "name": "raid_bdev1", 00:11:46.954 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:46.954 "strip_size_kb": 0, 00:11:46.954 "state": "online", 
00:11:46.954 "raid_level": "raid1", 00:11:46.954 "superblock": true, 00:11:46.954 "num_base_bdevs": 4, 00:11:46.954 "num_base_bdevs_discovered": 3, 00:11:46.954 "num_base_bdevs_operational": 3, 00:11:46.954 "base_bdevs_list": [ 00:11:46.954 { 00:11:46.954 "name": null, 00:11:46.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.954 "is_configured": false, 00:11:46.954 "data_offset": 0, 00:11:46.954 "data_size": 63488 00:11:46.954 }, 00:11:46.954 { 00:11:46.954 "name": "pt2", 00:11:46.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.954 "is_configured": true, 00:11:46.954 "data_offset": 2048, 00:11:46.954 "data_size": 63488 00:11:46.954 }, 00:11:46.954 { 00:11:46.954 "name": "pt3", 00:11:46.954 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.954 "is_configured": true, 00:11:46.954 "data_offset": 2048, 00:11:46.954 "data_size": 63488 00:11:46.954 }, 00:11:46.954 { 00:11:46.954 "name": "pt4", 00:11:46.954 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.954 "is_configured": true, 00:11:46.954 "data_offset": 2048, 00:11:46.954 "data_size": 63488 00:11:46.954 } 00:11:46.954 ] 00:11:46.954 }' 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.954 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.214 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.214 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.214 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.214 [2024-12-12 19:40:29.984757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.214 [2024-12-12 19:40:29.984813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.214 [2024-12-12 19:40:29.984959] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:47.214 [2024-12-12 19:40:29.985066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.214 [2024-12-12 19:40:29.985079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:47.214 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.214 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.214 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.214 19:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.214 19:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:47.214 
19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:47.214 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:47.474 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.475 [2024-12-12 19:40:30.076564] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.475 [2024-12-12 19:40:30.076647] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.475 [2024-12-12 19:40:30.076674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:47.475 [2024-12-12 19:40:30.076686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.475 [2024-12-12 19:40:30.079593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.475 [2024-12-12 19:40:30.079639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.475 [2024-12-12 19:40:30.079757] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.475 [2024-12-12 19:40:30.079824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.475 pt2 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.475 "name": "raid_bdev1", 00:11:47.475 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:47.475 "strip_size_kb": 0, 00:11:47.475 "state": "configuring", 00:11:47.475 "raid_level": "raid1", 00:11:47.475 "superblock": true, 00:11:47.475 "num_base_bdevs": 4, 00:11:47.475 "num_base_bdevs_discovered": 1, 00:11:47.475 "num_base_bdevs_operational": 3, 00:11:47.475 "base_bdevs_list": [ 00:11:47.475 { 00:11:47.475 "name": null, 00:11:47.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.475 "is_configured": false, 00:11:47.475 "data_offset": 2048, 00:11:47.475 "data_size": 63488 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "name": "pt2", 00:11:47.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.475 "is_configured": true, 00:11:47.475 "data_offset": 2048, 00:11:47.475 "data_size": 63488 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "name": null, 00:11:47.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.475 "is_configured": false, 00:11:47.475 "data_offset": 2048, 00:11:47.475 "data_size": 63488 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "name": null, 00:11:47.475 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.475 "is_configured": false, 00:11:47.475 "data_offset": 2048, 00:11:47.475 "data_size": 63488 00:11:47.475 } 00:11:47.475 ] 00:11:47.475 }' 
00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.475 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.735 [2024-12-12 19:40:30.439997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.735 [2024-12-12 19:40:30.440093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.735 [2024-12-12 19:40:30.440127] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:47.735 [2024-12-12 19:40:30.440142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.735 pt3 00:11:47.735 [2024-12-12 19:40:30.440824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.735 [2024-12-12 19:40:30.440865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.735 [2024-12-12 19:40:30.441073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:47.735 [2024-12-12 19:40:30.441113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.735 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.735 "name": "raid_bdev1", 00:11:47.735 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:47.735 "strip_size_kb": 0, 00:11:47.735 "state": "configuring", 00:11:47.735 "raid_level": "raid1", 00:11:47.735 "superblock": true, 00:11:47.735 "num_base_bdevs": 4, 00:11:47.735 "num_base_bdevs_discovered": 2, 00:11:47.735 "num_base_bdevs_operational": 3, 00:11:47.735 
"base_bdevs_list": [ 00:11:47.735 { 00:11:47.735 "name": null, 00:11:47.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.735 "is_configured": false, 00:11:47.735 "data_offset": 2048, 00:11:47.735 "data_size": 63488 00:11:47.735 }, 00:11:47.735 { 00:11:47.735 "name": "pt2", 00:11:47.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.735 "is_configured": true, 00:11:47.735 "data_offset": 2048, 00:11:47.735 "data_size": 63488 00:11:47.735 }, 00:11:47.735 { 00:11:47.736 "name": "pt3", 00:11:47.736 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.736 "is_configured": true, 00:11:47.736 "data_offset": 2048, 00:11:47.736 "data_size": 63488 00:11:47.736 }, 00:11:47.736 { 00:11:47.736 "name": null, 00:11:47.736 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.736 "is_configured": false, 00:11:47.736 "data_offset": 2048, 00:11:47.736 "data_size": 63488 00:11:47.736 } 00:11:47.736 ] 00:11:47.736 }' 00:11:47.736 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.736 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.996 [2024-12-12 19:40:30.823379] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.996 [2024-12-12 19:40:30.823492] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.996 [2024-12-12 19:40:30.823527] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:47.996 [2024-12-12 19:40:30.823556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.996 [2024-12-12 19:40:30.824165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.996 [2024-12-12 19:40:30.824205] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.996 [2024-12-12 19:40:30.824348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:47.996 [2024-12-12 19:40:30.824390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.996 [2024-12-12 19:40:30.824595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:47.996 [2024-12-12 19:40:30.824615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.996 [2024-12-12 19:40:30.824933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:47.996 [2024-12-12 19:40:30.825151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:47.996 [2024-12-12 19:40:30.825178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:47.996 [2024-12-12 19:40:30.825353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.996 pt4 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.996 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.256 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.256 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.256 "name": "raid_bdev1", 00:11:48.256 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:48.256 "strip_size_kb": 0, 00:11:48.256 "state": "online", 00:11:48.256 "raid_level": "raid1", 00:11:48.256 "superblock": true, 00:11:48.256 "num_base_bdevs": 4, 00:11:48.256 "num_base_bdevs_discovered": 3, 00:11:48.256 "num_base_bdevs_operational": 3, 00:11:48.256 "base_bdevs_list": [ 00:11:48.256 { 00:11:48.256 "name": null, 00:11:48.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.256 "is_configured": false, 00:11:48.256 
"data_offset": 2048, 00:11:48.256 "data_size": 63488 00:11:48.256 }, 00:11:48.256 { 00:11:48.256 "name": "pt2", 00:11:48.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.256 "is_configured": true, 00:11:48.256 "data_offset": 2048, 00:11:48.256 "data_size": 63488 00:11:48.256 }, 00:11:48.256 { 00:11:48.256 "name": "pt3", 00:11:48.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.256 "is_configured": true, 00:11:48.256 "data_offset": 2048, 00:11:48.256 "data_size": 63488 00:11:48.256 }, 00:11:48.256 { 00:11:48.256 "name": "pt4", 00:11:48.256 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.256 "is_configured": true, 00:11:48.256 "data_offset": 2048, 00:11:48.256 "data_size": 63488 00:11:48.256 } 00:11:48.256 ] 00:11:48.256 }' 00:11:48.256 19:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.256 19:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.517 [2024-12-12 19:40:31.242630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.517 [2024-12-12 19:40:31.242679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.517 [2024-12-12 19:40:31.242808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.517 [2024-12-12 19:40:31.242972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.517 [2024-12-12 19:40:31.242997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:48.517 19:40:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.517 [2024-12-12 19:40:31.318449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.517 [2024-12-12 19:40:31.318574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:48.517 [2024-12-12 19:40:31.318605] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:48.517 [2024-12-12 19:40:31.318623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.517 [2024-12-12 19:40:31.321372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.517 [2024-12-12 19:40:31.321426] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.517 [2024-12-12 19:40:31.321554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:48.517 [2024-12-12 19:40:31.321620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:48.517 [2024-12-12 19:40:31.321831] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:48.517 [2024-12-12 19:40:31.321859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.517 [2024-12-12 19:40:31.321881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:48.517 [2024-12-12 19:40:31.321982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.517 [2024-12-12 19:40:31.322138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:48.517 pt1 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.517 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.777 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.777 "name": "raid_bdev1", 00:11:48.777 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:48.777 "strip_size_kb": 0, 00:11:48.777 "state": "configuring", 00:11:48.777 "raid_level": "raid1", 00:11:48.777 "superblock": true, 00:11:48.777 "num_base_bdevs": 4, 00:11:48.777 "num_base_bdevs_discovered": 2, 00:11:48.777 "num_base_bdevs_operational": 3, 00:11:48.777 "base_bdevs_list": [ 00:11:48.777 { 00:11:48.777 "name": null, 00:11:48.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.777 "is_configured": false, 00:11:48.777 "data_offset": 2048, 00:11:48.777 
"data_size": 63488 00:11:48.777 }, 00:11:48.777 { 00:11:48.777 "name": "pt2", 00:11:48.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.777 "is_configured": true, 00:11:48.777 "data_offset": 2048, 00:11:48.777 "data_size": 63488 00:11:48.777 }, 00:11:48.777 { 00:11:48.777 "name": "pt3", 00:11:48.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.777 "is_configured": true, 00:11:48.777 "data_offset": 2048, 00:11:48.777 "data_size": 63488 00:11:48.777 }, 00:11:48.777 { 00:11:48.777 "name": null, 00:11:48.777 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.777 "is_configured": false, 00:11:48.777 "data_offset": 2048, 00:11:48.777 "data_size": 63488 00:11:48.777 } 00:11:48.777 ] 00:11:48.777 }' 00:11:48.777 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.777 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.038 [2024-12-12 
19:40:31.773760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.038 [2024-12-12 19:40:31.773868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.038 [2024-12-12 19:40:31.773899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:49.038 [2024-12-12 19:40:31.773912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.038 [2024-12-12 19:40:31.774508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.038 [2024-12-12 19:40:31.774560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.038 [2024-12-12 19:40:31.774698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:49.038 [2024-12-12 19:40:31.774739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.038 [2024-12-12 19:40:31.774923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:49.038 [2024-12-12 19:40:31.774942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.038 [2024-12-12 19:40:31.775273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:49.038 [2024-12-12 19:40:31.775475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:49.038 [2024-12-12 19:40:31.775500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:49.038 [2024-12-12 19:40:31.775710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.038 pt4 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:49.038 19:40:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.038 "name": "raid_bdev1", 00:11:49.038 "uuid": "afa2bdb3-dd45-464a-875e-ded6880c2550", 00:11:49.038 "strip_size_kb": 0, 00:11:49.038 "state": "online", 00:11:49.038 "raid_level": "raid1", 00:11:49.038 "superblock": true, 00:11:49.038 "num_base_bdevs": 4, 00:11:49.038 "num_base_bdevs_discovered": 3, 00:11:49.038 "num_base_bdevs_operational": 3, 00:11:49.038 "base_bdevs_list": [ 00:11:49.038 { 
00:11:49.038 "name": null, 00:11:49.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.038 "is_configured": false, 00:11:49.038 "data_offset": 2048, 00:11:49.038 "data_size": 63488 00:11:49.038 }, 00:11:49.038 { 00:11:49.038 "name": "pt2", 00:11:49.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.038 "is_configured": true, 00:11:49.038 "data_offset": 2048, 00:11:49.038 "data_size": 63488 00:11:49.038 }, 00:11:49.038 { 00:11:49.038 "name": "pt3", 00:11:49.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.038 "is_configured": true, 00:11:49.038 "data_offset": 2048, 00:11:49.038 "data_size": 63488 00:11:49.038 }, 00:11:49.038 { 00:11:49.038 "name": "pt4", 00:11:49.038 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.038 "is_configured": true, 00:11:49.038 "data_offset": 2048, 00:11:49.038 "data_size": 63488 00:11:49.038 } 00:11:49.038 ] 00:11:49.038 }' 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.038 19:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.608 
19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:49.608 [2024-12-12 19:40:32.277253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' afa2bdb3-dd45-464a-875e-ded6880c2550 '!=' afa2bdb3-dd45-464a-875e-ded6880c2550 ']' 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76224 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76224 ']' 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76224 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76224 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.608 killing process with pid 76224 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76224' 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76224 00:11:49.608 [2024-12-12 19:40:32.366857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.608 19:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76224 00:11:49.608 [2024-12-12 19:40:32.367047] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.608 [2024-12-12 19:40:32.367164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.608 [2024-12-12 19:40:32.367192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:50.177 [2024-12-12 19:40:32.795969] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.559 19:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:51.559 00:11:51.559 real 0m8.143s 00:11:51.559 user 0m12.504s 00:11:51.559 sys 0m1.560s 00:11:51.559 19:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.559 19:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.559 ************************************ 00:11:51.559 END TEST raid_superblock_test 00:11:51.559 ************************************ 00:11:51.559 19:40:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:51.559 19:40:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:51.559 19:40:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.559 19:40:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.559 ************************************ 00:11:51.559 START TEST raid_read_error_test 00:11:51.559 ************************************ 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:51.559 19:40:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bLxB75Xq84 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76711 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76711 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76711 ']' 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.559 19:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.559 [2024-12-12 19:40:34.182373] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:51.559 [2024-12-12 19:40:34.182524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76711 ] 00:11:51.559 [2024-12-12 19:40:34.360362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.819 [2024-12-12 19:40:34.497629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.078 [2024-12-12 19:40:34.732451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.078 [2024-12-12 19:40:34.732537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.341 19:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.341 19:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:52.341 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.341 19:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.341 BaseBdev1_malloc 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.341 true 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.341 [2024-12-12 19:40:35.072199] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:52.341 [2024-12-12 19:40:35.072287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.341 [2024-12-12 19:40:35.072313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:52.341 [2024-12-12 19:40:35.072328] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.341 [2024-12-12 19:40:35.074861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.341 [2024-12-12 19:40:35.074928] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:52.341 BaseBdev1 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.341 BaseBdev2_malloc 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.341 true 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.341 [2024-12-12 19:40:35.147960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:52.341 [2024-12-12 19:40:35.148043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.341 [2024-12-12 19:40:35.148065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:52.341 [2024-12-12 19:40:35.148079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.341 [2024-12-12 19:40:35.150592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.341 [2024-12-12 19:40:35.150658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:52.341 BaseBdev2 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.341 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 BaseBdev3_malloc 00:11:52.602 19:40:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 true 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 [2024-12-12 19:40:35.236393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:52.602 [2024-12-12 19:40:35.236470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.602 [2024-12-12 19:40:35.236493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:52.602 [2024-12-12 19:40:35.236507] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.602 [2024-12-12 19:40:35.238991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.602 [2024-12-12 19:40:35.239040] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:52.602 BaseBdev3 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 BaseBdev4_malloc 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 true 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 [2024-12-12 19:40:35.306909] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:52.602 [2024-12-12 19:40:35.306984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.602 [2024-12-12 19:40:35.307005] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:52.602 [2024-12-12 19:40:35.307020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.602 [2024-12-12 19:40:35.309427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.602 [2024-12-12 19:40:35.309473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:52.602 BaseBdev4 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 [2024-12-12 19:40:35.318944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.602 [2024-12-12 19:40:35.321055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.602 [2024-12-12 19:40:35.321157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.602 [2024-12-12 19:40:35.321228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:52.602 [2024-12-12 19:40:35.321579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:52.602 [2024-12-12 19:40:35.321612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.602 [2024-12-12 19:40:35.321913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:52.602 [2024-12-12 19:40:35.322147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:52.602 [2024-12-12 19:40:35.322170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:52.602 [2024-12-12 19:40:35.322375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:52.602 19:40:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.602 "name": "raid_bdev1", 00:11:52.602 "uuid": "d54e08ad-5891-44f3-89dc-03708c857879", 00:11:52.602 "strip_size_kb": 0, 00:11:52.602 "state": "online", 00:11:52.602 "raid_level": "raid1", 00:11:52.602 "superblock": true, 00:11:52.602 "num_base_bdevs": 4, 00:11:52.602 "num_base_bdevs_discovered": 4, 00:11:52.602 "num_base_bdevs_operational": 4, 00:11:52.602 "base_bdevs_list": [ 00:11:52.602 { 
00:11:52.602 "name": "BaseBdev1", 00:11:52.602 "uuid": "443a87ad-4b22-55c9-b837-1f35d7732672", 00:11:52.602 "is_configured": true, 00:11:52.602 "data_offset": 2048, 00:11:52.602 "data_size": 63488 00:11:52.602 }, 00:11:52.602 { 00:11:52.602 "name": "BaseBdev2", 00:11:52.602 "uuid": "a4fa6238-3b78-5236-a37b-1f4866a24066", 00:11:52.602 "is_configured": true, 00:11:52.602 "data_offset": 2048, 00:11:52.602 "data_size": 63488 00:11:52.602 }, 00:11:52.602 { 00:11:52.602 "name": "BaseBdev3", 00:11:52.602 "uuid": "340cf875-0c8b-578d-b543-00cefaf86cba", 00:11:52.602 "is_configured": true, 00:11:52.602 "data_offset": 2048, 00:11:52.602 "data_size": 63488 00:11:52.602 }, 00:11:52.602 { 00:11:52.602 "name": "BaseBdev4", 00:11:52.602 "uuid": "411ad15c-2415-5ba6-90f9-2bbb4ec23c5d", 00:11:52.602 "is_configured": true, 00:11:52.602 "data_offset": 2048, 00:11:52.602 "data_size": 63488 00:11:52.602 } 00:11:52.602 ] 00:11:52.602 }' 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.602 19:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.172 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:53.172 19:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:53.172 [2024-12-12 19:40:35.843679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.110 19:40:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.110 19:40:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.110 "name": "raid_bdev1", 00:11:54.110 "uuid": "d54e08ad-5891-44f3-89dc-03708c857879", 00:11:54.110 "strip_size_kb": 0, 00:11:54.110 "state": "online", 00:11:54.110 "raid_level": "raid1", 00:11:54.110 "superblock": true, 00:11:54.110 "num_base_bdevs": 4, 00:11:54.110 "num_base_bdevs_discovered": 4, 00:11:54.110 "num_base_bdevs_operational": 4, 00:11:54.110 "base_bdevs_list": [ 00:11:54.110 { 00:11:54.110 "name": "BaseBdev1", 00:11:54.110 "uuid": "443a87ad-4b22-55c9-b837-1f35d7732672", 00:11:54.110 "is_configured": true, 00:11:54.110 "data_offset": 2048, 00:11:54.110 "data_size": 63488 00:11:54.110 }, 00:11:54.110 { 00:11:54.110 "name": "BaseBdev2", 00:11:54.110 "uuid": "a4fa6238-3b78-5236-a37b-1f4866a24066", 00:11:54.110 "is_configured": true, 00:11:54.110 "data_offset": 2048, 00:11:54.110 "data_size": 63488 00:11:54.110 }, 00:11:54.110 { 00:11:54.110 "name": "BaseBdev3", 00:11:54.110 "uuid": "340cf875-0c8b-578d-b543-00cefaf86cba", 00:11:54.110 "is_configured": true, 00:11:54.110 "data_offset": 2048, 00:11:54.110 "data_size": 63488 00:11:54.110 }, 00:11:54.110 { 00:11:54.110 "name": "BaseBdev4", 00:11:54.110 "uuid": "411ad15c-2415-5ba6-90f9-2bbb4ec23c5d", 00:11:54.110 "is_configured": true, 00:11:54.110 "data_offset": 2048, 00:11:54.110 "data_size": 63488 00:11:54.110 } 00:11:54.110 ] 00:11:54.110 }' 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.110 19:40:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.370 19:40:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.370 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.370 19:40:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.370 [2024-12-12 19:40:37.209229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.370 [2024-12-12 19:40:37.209286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.370 [2024-12-12 19:40:37.212251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.370 [2024-12-12 19:40:37.212341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.370 [2024-12-12 19:40:37.212494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.370 [2024-12-12 19:40:37.212515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:54.629 { 00:11:54.629 "results": [ 00:11:54.629 { 00:11:54.629 "job": "raid_bdev1", 00:11:54.629 "core_mask": "0x1", 00:11:54.629 "workload": "randrw", 00:11:54.629 "percentage": 50, 00:11:54.629 "status": "finished", 00:11:54.629 "queue_depth": 1, 00:11:54.629 "io_size": 131072, 00:11:54.629 "runtime": 1.365939, 00:11:54.629 "iops": 7570.616257387775, 00:11:54.629 "mibps": 946.3270321734718, 00:11:54.629 "io_failed": 0, 00:11:54.629 "io_timeout": 0, 00:11:54.629 "avg_latency_us": 129.09270487722378, 00:11:54.629 "min_latency_us": 26.1589519650655, 00:11:54.629 "max_latency_us": 1681.3275109170306 00:11:54.629 } 00:11:54.629 ], 00:11:54.629 "core_count": 1 00:11:54.629 } 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76711 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76711 ']' 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76711 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76711 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76711' 00:11:54.629 killing process with pid 76711 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76711 00:11:54.629 [2024-12-12 19:40:37.255008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.629 19:40:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76711 00:11:54.889 [2024-12-12 19:40:37.615864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bLxB75Xq84 00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:56.271 00:11:56.271 real 0m4.877s 00:11:56.271 user 0m5.568s 00:11:56.271 sys 0m0.702s 
00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.271 19:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.271 ************************************ 00:11:56.271 END TEST raid_read_error_test 00:11:56.271 ************************************ 00:11:56.271 19:40:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:56.271 19:40:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:56.271 19:40:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.271 19:40:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.271 ************************************ 00:11:56.271 START TEST raid_write_error_test 00:11:56.271 ************************************ 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8xY50EAAYY 00:11:56.271 19:40:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76857 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76857 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76857 ']' 00:11:56.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.271 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.531 [2024-12-12 19:40:39.132468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:56.531 [2024-12-12 19:40:39.132638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76857 ] 00:11:56.531 [2024-12-12 19:40:39.312124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.838 [2024-12-12 19:40:39.451381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.097 [2024-12-12 19:40:39.694186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.097 [2024-12-12 19:40:39.694316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.355 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.355 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:57.355 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.355 19:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.356 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.356 BaseBdev1_malloc 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.356 true 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.356 [2024-12-12 19:40:40.024400] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:57.356 [2024-12-12 19:40:40.024589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.356 [2024-12-12 19:40:40.024637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:57.356 [2024-12-12 19:40:40.024689] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.356 [2024-12-12 19:40:40.027307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.356 [2024-12-12 19:40:40.027420] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.356 BaseBdev1 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.356 BaseBdev2_malloc 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:57.356 19:40:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.356 true 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.356 [2024-12-12 19:40:40.097529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:57.356 [2024-12-12 19:40:40.097681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.356 [2024-12-12 19:40:40.097719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:57.356 [2024-12-12 19:40:40.097776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.356 [2024-12-12 19:40:40.100226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.356 [2024-12-12 19:40:40.100322] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.356 BaseBdev2 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:57.356 BaseBdev3_malloc 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.356 true 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.356 [2024-12-12 19:40:40.181800] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:57.356 [2024-12-12 19:40:40.181940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.356 [2024-12-12 19:40:40.182000] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:57.356 [2024-12-12 19:40:40.182050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.356 [2024-12-12 19:40:40.184839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.356 [2024-12-12 19:40:40.184934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:57.356 BaseBdev3 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.356 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.614 BaseBdev4_malloc 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.614 true 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.614 [2024-12-12 19:40:40.250785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:57.614 [2024-12-12 19:40:40.250939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.614 [2024-12-12 19:40:40.250979] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:57.614 [2024-12-12 19:40:40.251048] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.614 [2024-12-12 19:40:40.253505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.614 [2024-12-12 19:40:40.253618] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:57.614 BaseBdev4 
00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.614 [2024-12-12 19:40:40.262830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.614 [2024-12-12 19:40:40.264995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.614 [2024-12-12 19:40:40.265128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.614 [2024-12-12 19:40:40.265238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.614 [2024-12-12 19:40:40.265560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:57.614 [2024-12-12 19:40:40.265623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.614 [2024-12-12 19:40:40.265953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:57.614 [2024-12-12 19:40:40.266201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:57.614 [2024-12-12 19:40:40.266249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:57.614 [2024-12-12 19:40:40.266492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.614 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.615 "name": "raid_bdev1", 00:11:57.615 "uuid": "8739baab-bbfd-4d4c-b407-6e4becd73a83", 00:11:57.615 "strip_size_kb": 0, 00:11:57.615 "state": "online", 00:11:57.615 "raid_level": "raid1", 00:11:57.615 "superblock": true, 00:11:57.615 "num_base_bdevs": 4, 00:11:57.615 "num_base_bdevs_discovered": 4, 00:11:57.615 
"num_base_bdevs_operational": 4, 00:11:57.615 "base_bdevs_list": [ 00:11:57.615 { 00:11:57.615 "name": "BaseBdev1", 00:11:57.615 "uuid": "4334aa1e-6a5d-5488-be0f-f3e73d424ad0", 00:11:57.615 "is_configured": true, 00:11:57.615 "data_offset": 2048, 00:11:57.615 "data_size": 63488 00:11:57.615 }, 00:11:57.615 { 00:11:57.615 "name": "BaseBdev2", 00:11:57.615 "uuid": "b078905d-acf9-51da-ba50-6d714986e23f", 00:11:57.615 "is_configured": true, 00:11:57.615 "data_offset": 2048, 00:11:57.615 "data_size": 63488 00:11:57.615 }, 00:11:57.615 { 00:11:57.615 "name": "BaseBdev3", 00:11:57.615 "uuid": "59be01fe-6ad6-5849-afc6-d8fe2310ac85", 00:11:57.615 "is_configured": true, 00:11:57.615 "data_offset": 2048, 00:11:57.615 "data_size": 63488 00:11:57.615 }, 00:11:57.615 { 00:11:57.615 "name": "BaseBdev4", 00:11:57.615 "uuid": "97bfcad4-71c6-5ba6-a560-4bbce28cf32d", 00:11:57.615 "is_configured": true, 00:11:57.615 "data_offset": 2048, 00:11:57.615 "data_size": 63488 00:11:57.615 } 00:11:57.615 ] 00:11:57.615 }' 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.615 19:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.181 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.181 19:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.181 [2024-12-12 19:40:40.847490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:59.116 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:59.116 19:40:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.117 [2024-12-12 19:40:41.762742] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:59.117 [2024-12-12 19:40:41.762923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.117 [2024-12-12 19:40:41.763203] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.117 "name": "raid_bdev1", 00:11:59.117 "uuid": "8739baab-bbfd-4d4c-b407-6e4becd73a83", 00:11:59.117 "strip_size_kb": 0, 00:11:59.117 "state": "online", 00:11:59.117 "raid_level": "raid1", 00:11:59.117 "superblock": true, 00:11:59.117 "num_base_bdevs": 4, 00:11:59.117 "num_base_bdevs_discovered": 3, 00:11:59.117 "num_base_bdevs_operational": 3, 00:11:59.117 "base_bdevs_list": [ 00:11:59.117 { 00:11:59.117 "name": null, 00:11:59.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.117 "is_configured": false, 00:11:59.117 "data_offset": 0, 00:11:59.117 "data_size": 63488 00:11:59.117 }, 00:11:59.117 { 00:11:59.117 "name": "BaseBdev2", 00:11:59.117 "uuid": "b078905d-acf9-51da-ba50-6d714986e23f", 00:11:59.117 "is_configured": true, 00:11:59.117 "data_offset": 2048, 00:11:59.117 "data_size": 63488 00:11:59.117 }, 00:11:59.117 { 00:11:59.117 "name": "BaseBdev3", 00:11:59.117 "uuid": "59be01fe-6ad6-5849-afc6-d8fe2310ac85", 00:11:59.117 "is_configured": true, 00:11:59.117 "data_offset": 2048, 00:11:59.117 "data_size": 63488 00:11:59.117 }, 00:11:59.117 { 00:11:59.117 "name": "BaseBdev4", 00:11:59.117 "uuid": "97bfcad4-71c6-5ba6-a560-4bbce28cf32d", 00:11:59.117 "is_configured": true, 00:11:59.117 "data_offset": 2048, 00:11:59.117 "data_size": 63488 00:11:59.117 } 00:11:59.117 ] 
00:11:59.117 }' 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.117 19:40:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.686 [2024-12-12 19:40:42.237052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.686 [2024-12-12 19:40:42.237202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.686 [2024-12-12 19:40:42.240074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.686 [2024-12-12 19:40:42.240206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.686 [2024-12-12 19:40:42.240390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.686 [2024-12-12 19:40:42.240452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:59.686 { 00:11:59.686 "results": [ 00:11:59.686 { 00:11:59.686 "job": "raid_bdev1", 00:11:59.686 "core_mask": "0x1", 00:11:59.686 "workload": "randrw", 00:11:59.686 "percentage": 50, 00:11:59.686 "status": "finished", 00:11:59.686 "queue_depth": 1, 00:11:59.686 "io_size": 131072, 00:11:59.686 "runtime": 1.390245, 00:11:59.686 "iops": 8353.203931681106, 00:11:59.686 "mibps": 1044.1504914601383, 00:11:59.686 "io_failed": 0, 00:11:59.686 "io_timeout": 0, 00:11:59.686 "avg_latency_us": 116.72546765652257, 00:11:59.686 "min_latency_us": 25.9353711790393, 00:11:59.686 "max_latency_us": 1473.844541484716 00:11:59.686 } 00:11:59.686 ], 00:11:59.686 "core_count": 1 
00:11:59.686 } 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76857 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76857 ']' 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76857 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76857 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76857' 00:11:59.686 killing process with pid 76857 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76857 00:11:59.686 [2024-12-12 19:40:42.286703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.686 19:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76857 00:11:59.945 [2024-12-12 19:40:42.635609] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.325 19:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8xY50EAAYY 00:12:01.325 19:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:01.325 19:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:01.325 19:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:01.326 19:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:01.326 19:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:01.326 19:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:01.326 19:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:01.326 00:12:01.326 real 0m4.903s 00:12:01.326 user 0m5.650s 00:12:01.326 sys 0m0.709s 00:12:01.326 19:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.326 19:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.326 ************************************ 00:12:01.326 END TEST raid_write_error_test 00:12:01.326 ************************************ 00:12:01.326 19:40:43 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:01.326 19:40:43 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:01.326 19:40:43 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:01.326 19:40:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:01.326 19:40:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.326 19:40:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.326 ************************************ 00:12:01.326 START TEST raid_rebuild_test 00:12:01.326 ************************************ 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:01.326 
19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77006 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77006 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77006 ']' 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.326 19:40:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.326 19:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.326 19:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.326 19:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.326 [2024-12-12 19:40:44.082572] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:01.326 [2024-12-12 19:40:44.082755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:01.326 Zero copy mechanism will not be used. 
00:12:01.326 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77006 ] 00:12:01.585 [2024-12-12 19:40:44.232975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.585 [2024-12-12 19:40:44.371924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.845 [2024-12-12 19:40:44.612859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.845 [2024-12-12 19:40:44.613056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.417 19:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.417 19:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:02.417 19:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:02.417 19:40:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:02.417 19:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 19:40:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 BaseBdev1_malloc 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 [2024-12-12 19:40:45.014987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:02.417 [2024-12-12 19:40:45.015139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.417 [2024-12-12 
19:40:45.015446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:02.417 [2024-12-12 19:40:45.015503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.417 [2024-12-12 19:40:45.018049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.417 [2024-12-12 19:40:45.018166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:02.417 BaseBdev1 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 BaseBdev2_malloc 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 [2024-12-12 19:40:45.075050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:02.417 [2024-12-12 19:40:45.075122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.417 [2024-12-12 19:40:45.075144] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:02.417 [2024-12-12 19:40:45.075160] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:02.417 [2024-12-12 19:40:45.077617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.417 [2024-12-12 19:40:45.077655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:02.417 BaseBdev2 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 spare_malloc 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 spare_delay 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 [2024-12-12 19:40:45.164283] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:02.417 [2024-12-12 19:40:45.164364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.417 [2024-12-12 19:40:45.164387] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:12:02.417 [2024-12-12 19:40:45.164401] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.417 [2024-12-12 19:40:45.166891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.417 [2024-12-12 19:40:45.166939] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:02.417 spare 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 [2024-12-12 19:40:45.176329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.417 [2024-12-12 19:40:45.178526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.417 [2024-12-12 19:40:45.178721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:02.417 [2024-12-12 19:40:45.178789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:02.417 [2024-12-12 19:40:45.179097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:02.417 [2024-12-12 19:40:45.179328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:02.417 [2024-12-12 19:40:45.179379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:02.417 [2024-12-12 19:40:45.179649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 
19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.417 "name": "raid_bdev1", 00:12:02.417 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:02.417 "strip_size_kb": 0, 00:12:02.417 "state": "online", 00:12:02.418 "raid_level": "raid1", 00:12:02.418 "superblock": false, 00:12:02.418 "num_base_bdevs": 2, 00:12:02.418 "num_base_bdevs_discovered": 
2, 00:12:02.418 "num_base_bdevs_operational": 2, 00:12:02.418 "base_bdevs_list": [ 00:12:02.418 { 00:12:02.418 "name": "BaseBdev1", 00:12:02.418 "uuid": "e88cd49e-83bc-504a-9ecc-d5681372eb40", 00:12:02.418 "is_configured": true, 00:12:02.418 "data_offset": 0, 00:12:02.418 "data_size": 65536 00:12:02.418 }, 00:12:02.418 { 00:12:02.418 "name": "BaseBdev2", 00:12:02.418 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:02.418 "is_configured": true, 00:12:02.418 "data_offset": 0, 00:12:02.418 "data_size": 65536 00:12:02.418 } 00:12:02.418 ] 00:12:02.418 }' 00:12:02.418 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.418 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.987 [2024-12-12 19:40:45.639893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.987 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:03.247 [2024-12-12 19:40:45.895211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:03.247 /dev/nbd0 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.247 1+0 records in 00:12:03.247 1+0 records out 00:12:03.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511505 s, 8.0 MB/s 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:12:03.247 19:40:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:08.520 65536+0 records in 00:12:08.520 65536+0 records out 00:12:08.520 33554432 bytes (34 MB, 32 MiB) copied, 4.53332 s, 7.4 MB/s 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:08.520 [2024-12-12 19:40:50.712604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.520 
19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.520 [2024-12-12 19:40:50.720731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.520 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.520 "name": "raid_bdev1", 00:12:08.520 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:08.520 "strip_size_kb": 0, 00:12:08.520 "state": "online", 00:12:08.520 "raid_level": "raid1", 00:12:08.520 "superblock": false, 00:12:08.520 "num_base_bdevs": 2, 00:12:08.520 "num_base_bdevs_discovered": 1, 00:12:08.520 "num_base_bdevs_operational": 1, 00:12:08.520 "base_bdevs_list": [ 00:12:08.520 { 00:12:08.520 "name": null, 00:12:08.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.521 "is_configured": false, 00:12:08.521 "data_offset": 0, 00:12:08.521 "data_size": 65536 00:12:08.521 }, 00:12:08.521 { 00:12:08.521 "name": "BaseBdev2", 00:12:08.521 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:08.521 "is_configured": true, 00:12:08.521 "data_offset": 0, 00:12:08.521 "data_size": 65536 00:12:08.521 } 00:12:08.521 ] 00:12:08.521 }' 00:12:08.521 19:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.521 19:40:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.521 19:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:08.521 19:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.521 19:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.521 [2024-12-12 19:40:51.175992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:08.521 [2024-12-12 19:40:51.193167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:08.521 19:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.521 19:40:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:08.521 [2024-12-12 19:40:51.195445] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.459 "name": "raid_bdev1", 00:12:09.459 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:09.459 "strip_size_kb": 0, 00:12:09.459 "state": "online", 00:12:09.459 "raid_level": "raid1", 00:12:09.459 "superblock": false, 00:12:09.459 "num_base_bdevs": 2, 00:12:09.459 "num_base_bdevs_discovered": 2, 00:12:09.459 "num_base_bdevs_operational": 2, 00:12:09.459 "process": { 00:12:09.459 "type": "rebuild", 00:12:09.459 "target": "spare", 00:12:09.459 "progress": { 00:12:09.459 "blocks": 20480, 00:12:09.459 "percent": 31 00:12:09.459 } 00:12:09.459 }, 00:12:09.459 "base_bdevs_list": [ 00:12:09.459 { 
00:12:09.459 "name": "spare", 00:12:09.459 "uuid": "63e42f59-88f0-5841-9817-058179362924", 00:12:09.459 "is_configured": true, 00:12:09.459 "data_offset": 0, 00:12:09.459 "data_size": 65536 00:12:09.459 }, 00:12:09.459 { 00:12:09.459 "name": "BaseBdev2", 00:12:09.459 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:09.459 "is_configured": true, 00:12:09.459 "data_offset": 0, 00:12:09.459 "data_size": 65536 00:12:09.459 } 00:12:09.459 ] 00:12:09.459 }' 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:09.459 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.719 [2024-12-12 19:40:52.330392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.719 [2024-12-12 19:40:52.406323] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:09.719 [2024-12-12 19:40:52.406586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.719 [2024-12-12 19:40:52.406643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.719 [2024-12-12 19:40:52.406675] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.719 19:40:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.719 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.720 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.720 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.720 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.720 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.720 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.720 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.720 "name": "raid_bdev1", 00:12:09.720 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:09.720 "strip_size_kb": 0, 00:12:09.720 "state": "online", 00:12:09.720 "raid_level": "raid1", 00:12:09.720 "superblock": false, 00:12:09.720 "num_base_bdevs": 2, 00:12:09.720 "num_base_bdevs_discovered": 1, 
00:12:09.720 "num_base_bdevs_operational": 1, 00:12:09.720 "base_bdevs_list": [ 00:12:09.720 { 00:12:09.720 "name": null, 00:12:09.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.720 "is_configured": false, 00:12:09.720 "data_offset": 0, 00:12:09.720 "data_size": 65536 00:12:09.720 }, 00:12:09.720 { 00:12:09.720 "name": "BaseBdev2", 00:12:09.720 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:09.720 "is_configured": true, 00:12:09.720 "data_offset": 0, 00:12:09.720 "data_size": 65536 00:12:09.720 } 00:12:09.720 ] 00:12:09.720 }' 00:12:09.720 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.720 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.288 "name": "raid_bdev1", 00:12:10.288 "uuid": 
"89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:10.288 "strip_size_kb": 0, 00:12:10.288 "state": "online", 00:12:10.288 "raid_level": "raid1", 00:12:10.288 "superblock": false, 00:12:10.288 "num_base_bdevs": 2, 00:12:10.288 "num_base_bdevs_discovered": 1, 00:12:10.288 "num_base_bdevs_operational": 1, 00:12:10.288 "base_bdevs_list": [ 00:12:10.288 { 00:12:10.288 "name": null, 00:12:10.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.288 "is_configured": false, 00:12:10.288 "data_offset": 0, 00:12:10.288 "data_size": 65536 00:12:10.288 }, 00:12:10.288 { 00:12:10.288 "name": "BaseBdev2", 00:12:10.288 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:10.288 "is_configured": true, 00:12:10.288 "data_offset": 0, 00:12:10.288 "data_size": 65536 00:12:10.288 } 00:12:10.288 ] 00:12:10.288 }' 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.288 19:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.288 19:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.288 19:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:10.288 19:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.288 19:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.288 [2024-12-12 19:40:53.040420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.288 [2024-12-12 19:40:53.059409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:10.288 19:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.288 19:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:12:10.288 [2024-12-12 19:40:53.061841] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:11.227 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.227 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.227 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.227 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.227 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.227 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.227 19:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.227 19:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.486 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.486 19:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.486 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.486 "name": "raid_bdev1", 00:12:11.486 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:11.486 "strip_size_kb": 0, 00:12:11.486 "state": "online", 00:12:11.486 "raid_level": "raid1", 00:12:11.486 "superblock": false, 00:12:11.486 "num_base_bdevs": 2, 00:12:11.486 "num_base_bdevs_discovered": 2, 00:12:11.486 "num_base_bdevs_operational": 2, 00:12:11.486 "process": { 00:12:11.486 "type": "rebuild", 00:12:11.486 "target": "spare", 00:12:11.487 "progress": { 00:12:11.487 "blocks": 20480, 00:12:11.487 "percent": 31 00:12:11.487 } 00:12:11.487 }, 00:12:11.487 "base_bdevs_list": [ 00:12:11.487 { 00:12:11.487 "name": "spare", 00:12:11.487 "uuid": 
"63e42f59-88f0-5841-9817-058179362924", 00:12:11.487 "is_configured": true, 00:12:11.487 "data_offset": 0, 00:12:11.487 "data_size": 65536 00:12:11.487 }, 00:12:11.487 { 00:12:11.487 "name": "BaseBdev2", 00:12:11.487 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:11.487 "is_configured": true, 00:12:11.487 "data_offset": 0, 00:12:11.487 "data_size": 65536 00:12:11.487 } 00:12:11.487 ] 00:12:11.487 }' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.487 "name": "raid_bdev1", 00:12:11.487 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:11.487 "strip_size_kb": 0, 00:12:11.487 "state": "online", 00:12:11.487 "raid_level": "raid1", 00:12:11.487 "superblock": false, 00:12:11.487 "num_base_bdevs": 2, 00:12:11.487 "num_base_bdevs_discovered": 2, 00:12:11.487 "num_base_bdevs_operational": 2, 00:12:11.487 "process": { 00:12:11.487 "type": "rebuild", 00:12:11.487 "target": "spare", 00:12:11.487 "progress": { 00:12:11.487 "blocks": 22528, 00:12:11.487 "percent": 34 00:12:11.487 } 00:12:11.487 }, 00:12:11.487 "base_bdevs_list": [ 00:12:11.487 { 00:12:11.487 "name": "spare", 00:12:11.487 "uuid": "63e42f59-88f0-5841-9817-058179362924", 00:12:11.487 "is_configured": true, 00:12:11.487 "data_offset": 0, 00:12:11.487 "data_size": 65536 00:12:11.487 }, 00:12:11.487 { 00:12:11.487 "name": "BaseBdev2", 00:12:11.487 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:11.487 "is_configured": true, 00:12:11.487 "data_offset": 0, 00:12:11.487 "data_size": 65536 00:12:11.487 } 00:12:11.487 ] 00:12:11.487 }' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.487 19:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:12.865 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.866 "name": "raid_bdev1", 00:12:12.866 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:12.866 "strip_size_kb": 0, 00:12:12.866 "state": "online", 00:12:12.866 "raid_level": "raid1", 00:12:12.866 "superblock": false, 00:12:12.866 "num_base_bdevs": 2, 00:12:12.866 "num_base_bdevs_discovered": 2, 00:12:12.866 "num_base_bdevs_operational": 2, 00:12:12.866 "process": { 00:12:12.866 "type": "rebuild", 00:12:12.866 "target": "spare", 
00:12:12.866 "progress": { 00:12:12.866 "blocks": 45056, 00:12:12.866 "percent": 68 00:12:12.866 } 00:12:12.866 }, 00:12:12.866 "base_bdevs_list": [ 00:12:12.866 { 00:12:12.866 "name": "spare", 00:12:12.866 "uuid": "63e42f59-88f0-5841-9817-058179362924", 00:12:12.866 "is_configured": true, 00:12:12.866 "data_offset": 0, 00:12:12.866 "data_size": 65536 00:12:12.866 }, 00:12:12.866 { 00:12:12.866 "name": "BaseBdev2", 00:12:12.866 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:12.866 "is_configured": true, 00:12:12.866 "data_offset": 0, 00:12:12.866 "data_size": 65536 00:12:12.866 } 00:12:12.866 ] 00:12:12.866 }' 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.866 19:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:13.805 [2024-12-12 19:40:56.289892] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:13.805 [2024-12-12 19:40:56.290144] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:13.805 [2024-12-12 19:40:56.290236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.805 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.805 "name": "raid_bdev1", 00:12:13.805 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:13.805 "strip_size_kb": 0, 00:12:13.805 "state": "online", 00:12:13.805 "raid_level": "raid1", 00:12:13.805 "superblock": false, 00:12:13.805 "num_base_bdevs": 2, 00:12:13.805 "num_base_bdevs_discovered": 2, 00:12:13.806 "num_base_bdevs_operational": 2, 00:12:13.806 "base_bdevs_list": [ 00:12:13.806 { 00:12:13.806 "name": "spare", 00:12:13.806 "uuid": "63e42f59-88f0-5841-9817-058179362924", 00:12:13.806 "is_configured": true, 00:12:13.806 "data_offset": 0, 00:12:13.806 "data_size": 65536 00:12:13.806 }, 00:12:13.806 { 00:12:13.806 "name": "BaseBdev2", 00:12:13.806 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:13.806 "is_configured": true, 00:12:13.806 "data_offset": 0, 00:12:13.806 "data_size": 65536 00:12:13.806 } 00:12:13.806 ] 00:12:13.806 }' 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.066 "name": "raid_bdev1", 00:12:14.066 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:14.066 "strip_size_kb": 0, 00:12:14.066 "state": "online", 00:12:14.066 "raid_level": "raid1", 00:12:14.066 "superblock": false, 00:12:14.066 "num_base_bdevs": 2, 00:12:14.066 "num_base_bdevs_discovered": 2, 00:12:14.066 "num_base_bdevs_operational": 2, 00:12:14.066 "base_bdevs_list": [ 00:12:14.066 { 00:12:14.066 "name": "spare", 00:12:14.066 "uuid": "63e42f59-88f0-5841-9817-058179362924", 00:12:14.066 "is_configured": true, 00:12:14.066 "data_offset": 0, 00:12:14.066 "data_size": 65536 
00:12:14.066 }, 00:12:14.066 { 00:12:14.066 "name": "BaseBdev2", 00:12:14.066 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:14.066 "is_configured": true, 00:12:14.066 "data_offset": 0, 00:12:14.066 "data_size": 65536 00:12:14.066 } 00:12:14.066 ] 00:12:14.066 }' 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.066 "name": "raid_bdev1", 00:12:14.066 "uuid": "89be55bc-3f47-4991-b976-8fb77f9e59e6", 00:12:14.066 "strip_size_kb": 0, 00:12:14.066 "state": "online", 00:12:14.066 "raid_level": "raid1", 00:12:14.066 "superblock": false, 00:12:14.066 "num_base_bdevs": 2, 00:12:14.066 "num_base_bdevs_discovered": 2, 00:12:14.066 "num_base_bdevs_operational": 2, 00:12:14.066 "base_bdevs_list": [ 00:12:14.066 { 00:12:14.066 "name": "spare", 00:12:14.066 "uuid": "63e42f59-88f0-5841-9817-058179362924", 00:12:14.066 "is_configured": true, 00:12:14.066 "data_offset": 0, 00:12:14.066 "data_size": 65536 00:12:14.066 }, 00:12:14.066 { 00:12:14.066 "name": "BaseBdev2", 00:12:14.066 "uuid": "6792b26b-d47c-53cd-8792-31f6ce379481", 00:12:14.066 "is_configured": true, 00:12:14.066 "data_offset": 0, 00:12:14.066 "data_size": 65536 00:12:14.066 } 00:12:14.066 ] 00:12:14.066 }' 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.066 19:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.636 [2024-12-12 19:40:57.190059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:14.636 [2024-12-12 19:40:57.190198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:12:14.636 [2024-12-12 19:40:57.190371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.636 [2024-12-12 19:40:57.190506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.636 [2024-12-12 19:40:57.190588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:14.636 /dev/nbd0 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.636 1+0 records in 00:12:14.636 1+0 records out 00:12:14.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422761 s, 9.7 MB/s 00:12:14.636 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:14.897 /dev/nbd1 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:12:14.897 1+0 records in 00:12:14.897 1+0 records out 00:12:14.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496604 s, 8.2 MB/s 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:14.897 19:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:15.157 19:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:15.157 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.157 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:15.157 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.157 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:15.157 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.157 19:40:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:15.416 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.416 19:40:58 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.416 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.416 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.416 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.416 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.416 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:15.416 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.416 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.416 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77006 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 
77006 ']' 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77006 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77006 00:12:15.675 killing process with pid 77006 00:12:15.675 Received shutdown signal, test time was about 60.000000 seconds 00:12:15.675 00:12:15.675 Latency(us) 00:12:15.675 [2024-12-12T19:40:58.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.675 [2024-12-12T19:40:58.520Z] =================================================================================================================== 00:12:15.675 [2024-12-12T19:40:58.520Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77006' 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77006 00:12:15.675 [2024-12-12 19:40:58.388454] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.675 19:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77006 00:12:15.934 [2024-12-12 19:40:58.694559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:17.311 00:12:17.311 real 0m15.862s 00:12:17.311 user 0m17.564s 00:12:17.311 sys 0m3.419s 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 ************************************ 00:12:17.311 END TEST raid_rebuild_test 00:12:17.311 ************************************ 00:12:17.311 19:40:59 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:17.311 19:40:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:17.311 19:40:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.311 19:40:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 ************************************ 00:12:17.311 START TEST raid_rebuild_test_sb 00:12:17.311 ************************************ 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:17.311 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77425 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77425 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77425 ']' 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:17.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.312 19:40:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.312 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.312 Zero copy mechanism will not be used. 00:12:17.312 [2024-12-12 19:41:00.011106] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:17.312 [2024-12-12 19:41:00.011226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77425 ] 00:12:17.570 [2024-12-12 19:41:00.188594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.570 [2024-12-12 19:41:00.306636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.828 [2024-12-12 19:41:00.512052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.828 [2024-12-12 19:41:00.512088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.086 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.087 BaseBdev1_malloc 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.087 [2024-12-12 19:41:00.900333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:18.087 [2024-12-12 19:41:00.900453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.087 [2024-12-12 19:41:00.900497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.087 [2024-12-12 19:41:00.900529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.087 [2024-12-12 19:41:00.902674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.087 [2024-12-12 19:41:00.902763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:18.087 BaseBdev1 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:18.087 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.346 BaseBdev2_malloc 00:12:18.346 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.346 19:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:18.346 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.346 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.346 [2024-12-12 19:41:00.955642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:18.346 [2024-12-12 19:41:00.955704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.346 [2024-12-12 19:41:00.955722] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:18.346 [2024-12-12 19:41:00.955733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.346 [2024-12-12 19:41:00.957722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.346 [2024-12-12 19:41:00.957760] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:18.346 BaseBdev2 00:12:18.346 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.346 19:41:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:18.346 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.346 19:41:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.346 spare_malloc 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # 
rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.346 spare_delay 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.346 [2024-12-12 19:41:01.036246] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.346 [2024-12-12 19:41:01.036364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.346 [2024-12-12 19:41:01.036403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:18.346 [2024-12-12 19:41:01.036436] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.346 [2024-12-12 19:41:01.038582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.346 [2024-12-12 19:41:01.038656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.346 spare 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.346 
[2024-12-12 19:41:01.048286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.346 [2024-12-12 19:41:01.050113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.346 [2024-12-12 19:41:01.050315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:18.346 [2024-12-12 19:41:01.050333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:18.346 [2024-12-12 19:41:01.050609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:18.346 [2024-12-12 19:41:01.050781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:18.346 [2024-12-12 19:41:01.050792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:18.346 [2024-12-12 19:41:01.050950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.346 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.347 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.347 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.347 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.347 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.347 "name": "raid_bdev1", 00:12:18.347 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:18.347 "strip_size_kb": 0, 00:12:18.347 "state": "online", 00:12:18.347 "raid_level": "raid1", 00:12:18.347 "superblock": true, 00:12:18.347 "num_base_bdevs": 2, 00:12:18.347 "num_base_bdevs_discovered": 2, 00:12:18.347 "num_base_bdevs_operational": 2, 00:12:18.347 "base_bdevs_list": [ 00:12:18.347 { 00:12:18.347 "name": "BaseBdev1", 00:12:18.347 "uuid": "14dd828c-1711-5e8e-a3a3-b86d9b71d952", 00:12:18.347 "is_configured": true, 00:12:18.347 "data_offset": 2048, 00:12:18.347 "data_size": 63488 00:12:18.347 }, 00:12:18.347 { 00:12:18.347 "name": "BaseBdev2", 00:12:18.347 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:18.347 "is_configured": true, 00:12:18.347 "data_offset": 2048, 00:12:18.347 "data_size": 63488 00:12:18.347 } 00:12:18.347 ] 00:12:18.347 }' 00:12:18.347 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.347 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.915 [2024-12-12 19:41:01.491838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- 
# bdev_list=('raid_bdev1') 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.915 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:19.174 [2024-12-12 19:41:01.771102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:19.174 /dev/nbd0 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.174 1+0 records in 00:12:19.174 1+0 records out 00:12:19.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370971 s, 11.0 MB/s 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:19.174 19:41:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:23.363 63488+0 records in 00:12:23.363 63488+0 records out 00:12:23.363 32505856 bytes (33 MB, 31 MiB) copied, 3.9457 s, 8.2 MB/s 00:12:23.363 19:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:23.363 19:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.363 19:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:23.363 19:41:05 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.363 19:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:23.363 19:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.363 19:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:23.363 19:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.363 [2024-12-12 19:41:06.007521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.363 [2024-12-12 19:41:06.019604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.363 "name": "raid_bdev1", 00:12:23.363 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:23.363 "strip_size_kb": 0, 00:12:23.363 "state": "online", 00:12:23.363 "raid_level": "raid1", 00:12:23.363 "superblock": true, 00:12:23.363 "num_base_bdevs": 2, 00:12:23.363 "num_base_bdevs_discovered": 1, 00:12:23.363 
"num_base_bdevs_operational": 1, 00:12:23.363 "base_bdevs_list": [ 00:12:23.363 { 00:12:23.363 "name": null, 00:12:23.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.363 "is_configured": false, 00:12:23.363 "data_offset": 0, 00:12:23.363 "data_size": 63488 00:12:23.363 }, 00:12:23.363 { 00:12:23.363 "name": "BaseBdev2", 00:12:23.363 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:23.363 "is_configured": true, 00:12:23.363 "data_offset": 2048, 00:12:23.363 "data_size": 63488 00:12:23.363 } 00:12:23.363 ] 00:12:23.363 }' 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.363 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.622 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:23.622 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.622 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.622 [2024-12-12 19:41:06.434919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:23.622 [2024-12-12 19:41:06.452167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:23.622 19:41:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.622 19:41:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:23.622 [2024-12-12 19:41:06.454007] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.002 "name": "raid_bdev1", 00:12:25.002 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:25.002 "strip_size_kb": 0, 00:12:25.002 "state": "online", 00:12:25.002 "raid_level": "raid1", 00:12:25.002 "superblock": true, 00:12:25.002 "num_base_bdevs": 2, 00:12:25.002 "num_base_bdevs_discovered": 2, 00:12:25.002 "num_base_bdevs_operational": 2, 00:12:25.002 "process": { 00:12:25.002 "type": "rebuild", 00:12:25.002 "target": "spare", 00:12:25.002 "progress": { 00:12:25.002 "blocks": 20480, 00:12:25.002 "percent": 32 00:12:25.002 } 00:12:25.002 }, 00:12:25.002 "base_bdevs_list": [ 00:12:25.002 { 00:12:25.002 "name": "spare", 00:12:25.002 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:25.002 "is_configured": true, 00:12:25.002 "data_offset": 2048, 00:12:25.002 "data_size": 63488 00:12:25.002 }, 00:12:25.002 { 00:12:25.002 "name": "BaseBdev2", 00:12:25.002 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:25.002 "is_configured": true, 00:12:25.002 "data_offset": 2048, 00:12:25.002 "data_size": 63488 00:12:25.002 } 00:12:25.002 ] 00:12:25.002 }' 00:12:25.002 19:41:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.002 [2024-12-12 19:41:07.597740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.002 [2024-12-12 19:41:07.659302] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:25.002 [2024-12-12 19:41:07.659389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.002 [2024-12-12 19:41:07.659404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.002 [2024-12-12 19:41:07.659413] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.002 19:41:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.002 "name": "raid_bdev1", 00:12:25.002 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:25.002 "strip_size_kb": 0, 00:12:25.002 "state": "online", 00:12:25.002 "raid_level": "raid1", 00:12:25.002 "superblock": true, 00:12:25.002 "num_base_bdevs": 2, 00:12:25.002 "num_base_bdevs_discovered": 1, 00:12:25.002 "num_base_bdevs_operational": 1, 00:12:25.002 "base_bdevs_list": [ 00:12:25.002 { 00:12:25.002 "name": null, 00:12:25.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.002 "is_configured": false, 00:12:25.002 "data_offset": 0, 00:12:25.002 "data_size": 63488 00:12:25.002 }, 00:12:25.002 { 00:12:25.002 "name": "BaseBdev2", 00:12:25.002 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:25.002 
"is_configured": true, 00:12:25.002 "data_offset": 2048, 00:12:25.002 "data_size": 63488 00:12:25.002 } 00:12:25.002 ] 00:12:25.002 }' 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.002 19:41:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.262 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:25.262 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.262 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:25.262 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:25.262 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.262 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.262 19:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.262 19:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.262 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.521 "name": "raid_bdev1", 00:12:25.521 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:25.521 "strip_size_kb": 0, 00:12:25.521 "state": "online", 00:12:25.521 "raid_level": "raid1", 00:12:25.521 "superblock": true, 00:12:25.521 "num_base_bdevs": 2, 00:12:25.521 "num_base_bdevs_discovered": 1, 00:12:25.521 "num_base_bdevs_operational": 1, 00:12:25.521 "base_bdevs_list": [ 00:12:25.521 { 00:12:25.521 "name": null, 00:12:25.521 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:25.521 "is_configured": false, 00:12:25.521 "data_offset": 0, 00:12:25.521 "data_size": 63488 00:12:25.521 }, 00:12:25.521 { 00:12:25.521 "name": "BaseBdev2", 00:12:25.521 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:25.521 "is_configured": true, 00:12:25.521 "data_offset": 2048, 00:12:25.521 "data_size": 63488 00:12:25.521 } 00:12:25.521 ] 00:12:25.521 }' 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.521 [2024-12-12 19:41:08.230137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.521 [2024-12-12 19:41:08.245686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.521 19:41:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:25.521 [2024-12-12 19:41:08.247500] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.458 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.458 "name": "raid_bdev1", 00:12:26.458 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:26.458 "strip_size_kb": 0, 00:12:26.458 "state": "online", 00:12:26.458 "raid_level": "raid1", 00:12:26.458 "superblock": true, 00:12:26.458 "num_base_bdevs": 2, 00:12:26.458 "num_base_bdevs_discovered": 2, 00:12:26.458 "num_base_bdevs_operational": 2, 00:12:26.458 "process": { 00:12:26.458 "type": "rebuild", 00:12:26.458 "target": "spare", 00:12:26.458 "progress": { 00:12:26.458 "blocks": 20480, 00:12:26.458 "percent": 32 00:12:26.458 } 00:12:26.458 }, 00:12:26.458 "base_bdevs_list": [ 00:12:26.458 { 00:12:26.458 "name": "spare", 00:12:26.458 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:26.458 "is_configured": true, 00:12:26.458 "data_offset": 2048, 00:12:26.458 "data_size": 63488 00:12:26.458 }, 00:12:26.458 { 00:12:26.458 "name": "BaseBdev2", 00:12:26.458 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:26.458 "is_configured": true, 00:12:26.458 "data_offset": 2048, 
00:12:26.458 "data_size": 63488 00:12:26.458 } 00:12:26.458 ] 00:12:26.458 }' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:26.718 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=385 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.718 "name": "raid_bdev1", 00:12:26.718 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:26.718 "strip_size_kb": 0, 00:12:26.718 "state": "online", 00:12:26.718 "raid_level": "raid1", 00:12:26.718 "superblock": true, 00:12:26.718 "num_base_bdevs": 2, 00:12:26.718 "num_base_bdevs_discovered": 2, 00:12:26.718 "num_base_bdevs_operational": 2, 00:12:26.718 "process": { 00:12:26.718 "type": "rebuild", 00:12:26.718 "target": "spare", 00:12:26.718 "progress": { 00:12:26.718 "blocks": 22528, 00:12:26.718 "percent": 35 00:12:26.718 } 00:12:26.718 }, 00:12:26.718 "base_bdevs_list": [ 00:12:26.718 { 00:12:26.718 "name": "spare", 00:12:26.718 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:26.718 "is_configured": true, 00:12:26.718 "data_offset": 2048, 00:12:26.718 "data_size": 63488 00:12:26.718 }, 00:12:26.718 { 00:12:26.718 "name": "BaseBdev2", 00:12:26.718 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:26.718 "is_configured": true, 00:12:26.718 "data_offset": 2048, 00:12:26.718 "data_size": 63488 00:12:26.718 } 00:12:26.718 ] 00:12:26.718 }' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.718 19:41:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.096 "name": "raid_bdev1", 00:12:28.096 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:28.096 "strip_size_kb": 0, 00:12:28.096 "state": "online", 00:12:28.096 "raid_level": "raid1", 00:12:28.096 "superblock": true, 00:12:28.096 "num_base_bdevs": 2, 00:12:28.096 "num_base_bdevs_discovered": 2, 00:12:28.096 "num_base_bdevs_operational": 2, 00:12:28.096 "process": { 00:12:28.096 "type": "rebuild", 00:12:28.096 "target": "spare", 
00:12:28.096 "progress": { 00:12:28.096 "blocks": 45056, 00:12:28.096 "percent": 70 00:12:28.096 } 00:12:28.096 }, 00:12:28.096 "base_bdevs_list": [ 00:12:28.096 { 00:12:28.096 "name": "spare", 00:12:28.096 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:28.096 "is_configured": true, 00:12:28.096 "data_offset": 2048, 00:12:28.096 "data_size": 63488 00:12:28.096 }, 00:12:28.096 { 00:12:28.096 "name": "BaseBdev2", 00:12:28.096 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:28.096 "is_configured": true, 00:12:28.096 "data_offset": 2048, 00:12:28.096 "data_size": 63488 00:12:28.096 } 00:12:28.096 ] 00:12:28.096 }' 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.096 19:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.665 [2024-12-12 19:41:11.360611] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:28.665 [2024-12-12 19:41:11.360680] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:28.665 [2024-12-12 19:41:11.360776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.924 "name": "raid_bdev1", 00:12:28.924 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:28.924 "strip_size_kb": 0, 00:12:28.924 "state": "online", 00:12:28.924 "raid_level": "raid1", 00:12:28.924 "superblock": true, 00:12:28.924 "num_base_bdevs": 2, 00:12:28.924 "num_base_bdevs_discovered": 2, 00:12:28.924 "num_base_bdevs_operational": 2, 00:12:28.924 "base_bdevs_list": [ 00:12:28.924 { 00:12:28.924 "name": "spare", 00:12:28.924 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:28.924 "is_configured": true, 00:12:28.924 "data_offset": 2048, 00:12:28.924 "data_size": 63488 00:12:28.924 }, 00:12:28.924 { 00:12:28.924 "name": "BaseBdev2", 00:12:28.924 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:28.924 "is_configured": true, 00:12:28.924 "data_offset": 2048, 00:12:28.924 "data_size": 63488 00:12:28.924 } 00:12:28.924 ] 00:12:28.924 }' 00:12:28.924 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.182 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:29.182 
19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.182 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.183 "name": "raid_bdev1", 00:12:29.183 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:29.183 "strip_size_kb": 0, 00:12:29.183 "state": "online", 00:12:29.183 "raid_level": "raid1", 00:12:29.183 "superblock": true, 00:12:29.183 "num_base_bdevs": 2, 00:12:29.183 "num_base_bdevs_discovered": 2, 00:12:29.183 "num_base_bdevs_operational": 2, 00:12:29.183 "base_bdevs_list": [ 00:12:29.183 { 00:12:29.183 "name": "spare", 00:12:29.183 "uuid": 
"f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:29.183 "is_configured": true, 00:12:29.183 "data_offset": 2048, 00:12:29.183 "data_size": 63488 00:12:29.183 }, 00:12:29.183 { 00:12:29.183 "name": "BaseBdev2", 00:12:29.183 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:29.183 "is_configured": true, 00:12:29.183 "data_offset": 2048, 00:12:29.183 "data_size": 63488 00:12:29.183 } 00:12:29.183 ] 00:12:29.183 }' 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.183 19:41:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.183 "name": "raid_bdev1", 00:12:29.183 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:29.183 "strip_size_kb": 0, 00:12:29.183 "state": "online", 00:12:29.183 "raid_level": "raid1", 00:12:29.183 "superblock": true, 00:12:29.183 "num_base_bdevs": 2, 00:12:29.183 "num_base_bdevs_discovered": 2, 00:12:29.183 "num_base_bdevs_operational": 2, 00:12:29.183 "base_bdevs_list": [ 00:12:29.183 { 00:12:29.183 "name": "spare", 00:12:29.183 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:29.183 "is_configured": true, 00:12:29.183 "data_offset": 2048, 00:12:29.183 "data_size": 63488 00:12:29.183 }, 00:12:29.183 { 00:12:29.183 "name": "BaseBdev2", 00:12:29.183 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:29.183 "is_configured": true, 00:12:29.183 "data_offset": 2048, 00:12:29.183 "data_size": 63488 00:12:29.183 } 00:12:29.183 ] 00:12:29.183 }' 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.183 19:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.751 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.751 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.751 19:41:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.751 [2024-12-12 19:41:12.303679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.751 [2024-12-12 19:41:12.303750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.751 [2024-12-12 19:41:12.303853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.751 [2024-12-12 19:41:12.303970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.751 [2024-12-12 19:41:12.304018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:29.751 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:29.752 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:29.752 /dev/nbd0 00:12:30.011 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.011 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.012 1+0 records in 00:12:30.012 1+0 records out 00:12:30.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548403 s, 7.5 MB/s 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.012 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:30.272 /dev/nbd1 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:30.272 19:41:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.272 1+0 records in 00:12:30.272 1+0 records out 00:12:30.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265247 s, 15.4 MB/s 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.272 19:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:30.272 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:30.272 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.272 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.272 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.272 
19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:30.272 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.272 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.532 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.792 [2024-12-12 19:41:13.614332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:30.792 [2024-12-12 19:41:13.614427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.792 [2024-12-12 19:41:13.614468] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:30.792 [2024-12-12 19:41:13.614497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.792 [2024-12-12 19:41:13.616730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.792 [2024-12-12 19:41:13.616797] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:30.792 [2024-12-12 19:41:13.616899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:30.792 [2024-12-12 
19:41:13.616954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.792 [2024-12-12 19:41:13.617124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.792 spare 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.792 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.054 [2024-12-12 19:41:13.717026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:31.054 [2024-12-12 19:41:13.717089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.054 [2024-12-12 19:41:13.717355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:31.054 [2024-12-12 19:41:13.717614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:31.054 [2024-12-12 19:41:13.717659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:31.054 [2024-12-12 19:41:13.717877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.054 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.054 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:31.054 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.055 "name": "raid_bdev1", 00:12:31.055 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:31.055 "strip_size_kb": 0, 00:12:31.055 "state": "online", 00:12:31.055 "raid_level": "raid1", 00:12:31.055 "superblock": true, 00:12:31.055 "num_base_bdevs": 2, 00:12:31.055 "num_base_bdevs_discovered": 2, 00:12:31.055 "num_base_bdevs_operational": 2, 00:12:31.055 "base_bdevs_list": [ 00:12:31.055 { 00:12:31.055 "name": "spare", 00:12:31.055 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:31.055 "is_configured": true, 00:12:31.055 "data_offset": 2048, 00:12:31.055 "data_size": 63488 00:12:31.055 }, 00:12:31.055 { 00:12:31.055 "name": "BaseBdev2", 00:12:31.055 "uuid": 
"5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:31.055 "is_configured": true, 00:12:31.055 "data_offset": 2048, 00:12:31.055 "data_size": 63488 00:12:31.055 } 00:12:31.055 ] 00:12:31.055 }' 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.055 19:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.316 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.316 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.316 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:31.316 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.316 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.576 "name": "raid_bdev1", 00:12:31.576 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:31.576 "strip_size_kb": 0, 00:12:31.576 "state": "online", 00:12:31.576 "raid_level": "raid1", 00:12:31.576 "superblock": true, 00:12:31.576 "num_base_bdevs": 2, 00:12:31.576 "num_base_bdevs_discovered": 2, 00:12:31.576 "num_base_bdevs_operational": 2, 00:12:31.576 "base_bdevs_list": [ 00:12:31.576 { 
00:12:31.576 "name": "spare", 00:12:31.576 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:31.576 "is_configured": true, 00:12:31.576 "data_offset": 2048, 00:12:31.576 "data_size": 63488 00:12:31.576 }, 00:12:31.576 { 00:12:31.576 "name": "BaseBdev2", 00:12:31.576 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:31.576 "is_configured": true, 00:12:31.576 "data_offset": 2048, 00:12:31.576 "data_size": 63488 00:12:31.576 } 00:12:31.576 ] 00:12:31.576 }' 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.576 [2024-12-12 19:41:14.329177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.576 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.577 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.577 "name": "raid_bdev1", 00:12:31.577 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:31.577 "strip_size_kb": 0, 00:12:31.577 
"state": "online", 00:12:31.577 "raid_level": "raid1", 00:12:31.577 "superblock": true, 00:12:31.577 "num_base_bdevs": 2, 00:12:31.577 "num_base_bdevs_discovered": 1, 00:12:31.577 "num_base_bdevs_operational": 1, 00:12:31.577 "base_bdevs_list": [ 00:12:31.577 { 00:12:31.577 "name": null, 00:12:31.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.577 "is_configured": false, 00:12:31.577 "data_offset": 0, 00:12:31.577 "data_size": 63488 00:12:31.577 }, 00:12:31.577 { 00:12:31.577 "name": "BaseBdev2", 00:12:31.577 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:31.577 "is_configured": true, 00:12:31.577 "data_offset": 2048, 00:12:31.577 "data_size": 63488 00:12:31.577 } 00:12:31.577 ] 00:12:31.577 }' 00:12:31.577 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.577 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.144 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:32.144 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.144 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.144 [2024-12-12 19:41:14.772673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.144 [2024-12-12 19:41:14.772934] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:32.144 [2024-12-12 19:41:14.772996] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:32.144 [2024-12-12 19:41:14.773065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.144 [2024-12-12 19:41:14.788878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:32.144 19:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.144 19:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:32.144 [2024-12-12 19:41:14.790737] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.086 "name": "raid_bdev1", 00:12:33.086 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:33.086 "strip_size_kb": 0, 00:12:33.086 "state": "online", 00:12:33.086 "raid_level": "raid1", 
00:12:33.086 "superblock": true, 00:12:33.086 "num_base_bdevs": 2, 00:12:33.086 "num_base_bdevs_discovered": 2, 00:12:33.086 "num_base_bdevs_operational": 2, 00:12:33.086 "process": { 00:12:33.086 "type": "rebuild", 00:12:33.086 "target": "spare", 00:12:33.086 "progress": { 00:12:33.086 "blocks": 20480, 00:12:33.086 "percent": 32 00:12:33.086 } 00:12:33.086 }, 00:12:33.086 "base_bdevs_list": [ 00:12:33.086 { 00:12:33.086 "name": "spare", 00:12:33.086 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:33.086 "is_configured": true, 00:12:33.086 "data_offset": 2048, 00:12:33.086 "data_size": 63488 00:12:33.086 }, 00:12:33.086 { 00:12:33.086 "name": "BaseBdev2", 00:12:33.086 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:33.086 "is_configured": true, 00:12:33.086 "data_offset": 2048, 00:12:33.086 "data_size": 63488 00:12:33.086 } 00:12:33.086 ] 00:12:33.086 }' 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.086 19:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.345 [2024-12-12 19:41:15.930929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.345 [2024-12-12 19:41:15.995953] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:33.345 [2024-12-12 19:41:15.996031] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:33.345 [2024-12-12 19:41:15.996045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.345 [2024-12-12 19:41:15.996053] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.345 "name": "raid_bdev1", 00:12:33.345 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:33.345 "strip_size_kb": 0, 00:12:33.345 "state": "online", 00:12:33.345 "raid_level": "raid1", 00:12:33.345 "superblock": true, 00:12:33.345 "num_base_bdevs": 2, 00:12:33.345 "num_base_bdevs_discovered": 1, 00:12:33.345 "num_base_bdevs_operational": 1, 00:12:33.345 "base_bdevs_list": [ 00:12:33.345 { 00:12:33.345 "name": null, 00:12:33.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.345 "is_configured": false, 00:12:33.345 "data_offset": 0, 00:12:33.345 "data_size": 63488 00:12:33.345 }, 00:12:33.345 { 00:12:33.345 "name": "BaseBdev2", 00:12:33.345 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:33.345 "is_configured": true, 00:12:33.345 "data_offset": 2048, 00:12:33.345 "data_size": 63488 00:12:33.345 } 00:12:33.345 ] 00:12:33.345 }' 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.345 19:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.605 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.605 19:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.605 19:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.605 [2024-12-12 19:41:16.411064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:33.605 [2024-12-12 19:41:16.411165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.605 [2024-12-12 19:41:16.411202] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:33.605 [2024-12-12 19:41:16.411231] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.605 [2024-12-12 19:41:16.411762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.605 [2024-12-12 19:41:16.411830] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.605 [2024-12-12 19:41:16.411964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:33.605 [2024-12-12 19:41:16.412006] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:33.605 [2024-12-12 19:41:16.412066] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:33.605 [2024-12-12 19:41:16.412132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.605 [2024-12-12 19:41:16.428123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:33.605 spare 00:12:33.605 19:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.605 [2024-12-12 19:41:16.429953] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.605 19:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.983 "name": "raid_bdev1", 00:12:34.983 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:34.983 "strip_size_kb": 0, 00:12:34.983 "state": "online", 00:12:34.983 "raid_level": "raid1", 00:12:34.983 "superblock": true, 00:12:34.983 "num_base_bdevs": 2, 00:12:34.983 "num_base_bdevs_discovered": 2, 00:12:34.983 "num_base_bdevs_operational": 2, 00:12:34.983 "process": { 00:12:34.983 "type": "rebuild", 00:12:34.983 "target": "spare", 00:12:34.983 "progress": { 00:12:34.983 "blocks": 20480, 00:12:34.983 "percent": 32 00:12:34.983 } 00:12:34.983 }, 00:12:34.983 "base_bdevs_list": [ 00:12:34.983 { 00:12:34.983 "name": "spare", 00:12:34.983 "uuid": "f16bfec2-3d2f-5f75-b8c9-ab2dd36b12ee", 00:12:34.983 "is_configured": true, 00:12:34.983 "data_offset": 2048, 00:12:34.983 "data_size": 63488 00:12:34.983 }, 00:12:34.983 { 00:12:34.983 "name": "BaseBdev2", 00:12:34.983 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:34.983 "is_configured": true, 00:12:34.983 "data_offset": 2048, 00:12:34.983 "data_size": 63488 00:12:34.983 } 00:12:34.983 ] 00:12:34.983 }' 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.983 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.983 
19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.984 [2024-12-12 19:41:17.577877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.984 [2024-12-12 19:41:17.635473] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:34.984 [2024-12-12 19:41:17.635536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.984 [2024-12-12 19:41:17.635587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.984 [2024-12-12 19:41:17.635595] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.984 "name": "raid_bdev1", 00:12:34.984 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:34.984 "strip_size_kb": 0, 00:12:34.984 "state": "online", 00:12:34.984 "raid_level": "raid1", 00:12:34.984 "superblock": true, 00:12:34.984 "num_base_bdevs": 2, 00:12:34.984 "num_base_bdevs_discovered": 1, 00:12:34.984 "num_base_bdevs_operational": 1, 00:12:34.984 "base_bdevs_list": [ 00:12:34.984 { 00:12:34.984 "name": null, 00:12:34.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.984 "is_configured": false, 00:12:34.984 "data_offset": 0, 00:12:34.984 "data_size": 63488 00:12:34.984 }, 00:12:34.984 { 00:12:34.984 "name": "BaseBdev2", 00:12:34.984 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:34.984 "is_configured": true, 00:12:34.984 "data_offset": 2048, 00:12:34.984 "data_size": 63488 00:12:34.984 } 00:12:34.984 ] 00:12:34.984 }' 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.984 19:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.243 19:41:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.243 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.243 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.243 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.243 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.243 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.243 19:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.244 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.244 19:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.504 "name": "raid_bdev1", 00:12:35.504 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:35.504 "strip_size_kb": 0, 00:12:35.504 "state": "online", 00:12:35.504 "raid_level": "raid1", 00:12:35.504 "superblock": true, 00:12:35.504 "num_base_bdevs": 2, 00:12:35.504 "num_base_bdevs_discovered": 1, 00:12:35.504 "num_base_bdevs_operational": 1, 00:12:35.504 "base_bdevs_list": [ 00:12:35.504 { 00:12:35.504 "name": null, 00:12:35.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.504 "is_configured": false, 00:12:35.504 "data_offset": 0, 00:12:35.504 "data_size": 63488 00:12:35.504 }, 00:12:35.504 { 00:12:35.504 "name": "BaseBdev2", 00:12:35.504 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:35.504 "is_configured": true, 00:12:35.504 "data_offset": 2048, 00:12:35.504 "data_size": 
63488 00:12:35.504 } 00:12:35.504 ] 00:12:35.504 }' 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.504 [2024-12-12 19:41:18.192175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:35.504 [2024-12-12 19:41:18.192277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.504 [2024-12-12 19:41:18.192317] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:35.504 [2024-12-12 19:41:18.192359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.504 [2024-12-12 19:41:18.192897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.504 [2024-12-12 19:41:18.192955] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:35.504 [2024-12-12 19:41:18.193085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:35.504 [2024-12-12 19:41:18.193127] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:35.504 [2024-12-12 19:41:18.193168] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:35.504 [2024-12-12 19:41:18.193208] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:35.504 BaseBdev1 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.504 19:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.441 "name": "raid_bdev1", 00:12:36.441 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:36.441 "strip_size_kb": 0, 00:12:36.441 "state": "online", 00:12:36.441 "raid_level": "raid1", 00:12:36.441 "superblock": true, 00:12:36.441 "num_base_bdevs": 2, 00:12:36.441 "num_base_bdevs_discovered": 1, 00:12:36.441 "num_base_bdevs_operational": 1, 00:12:36.441 "base_bdevs_list": [ 00:12:36.441 { 00:12:36.441 "name": null, 00:12:36.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.441 "is_configured": false, 00:12:36.441 "data_offset": 0, 00:12:36.441 "data_size": 63488 00:12:36.441 }, 00:12:36.441 { 00:12:36.441 "name": "BaseBdev2", 00:12:36.441 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:36.441 "is_configured": true, 00:12:36.441 "data_offset": 2048, 00:12:36.441 "data_size": 63488 00:12:36.441 } 00:12:36.441 ] 00:12:36.441 }' 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.441 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.011 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.011 "name": "raid_bdev1", 00:12:37.011 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:37.011 "strip_size_kb": 0, 00:12:37.011 "state": "online", 00:12:37.011 "raid_level": "raid1", 00:12:37.011 "superblock": true, 00:12:37.011 "num_base_bdevs": 2, 00:12:37.011 "num_base_bdevs_discovered": 1, 00:12:37.012 "num_base_bdevs_operational": 1, 00:12:37.012 "base_bdevs_list": [ 00:12:37.012 { 00:12:37.012 "name": null, 00:12:37.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.012 "is_configured": false, 00:12:37.012 "data_offset": 0, 00:12:37.012 "data_size": 63488 00:12:37.012 }, 00:12:37.012 { 00:12:37.012 "name": "BaseBdev2", 00:12:37.012 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:37.012 "is_configured": true, 00:12:37.012 "data_offset": 2048, 00:12:37.012 "data_size": 63488 00:12:37.012 } 00:12:37.012 ] 00:12:37.012 }' 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.012 19:41:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.012 [2024-12-12 19:41:19.789704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.012 [2024-12-12 19:41:19.789939] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:37.012 [2024-12-12 19:41:19.790015] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:37.012 request: 00:12:37.012 { 00:12:37.012 "base_bdev": "BaseBdev1", 00:12:37.012 "raid_bdev": "raid_bdev1", 00:12:37.012 "method": 
"bdev_raid_add_base_bdev", 00:12:37.012 "req_id": 1 00:12:37.012 } 00:12:37.012 Got JSON-RPC error response 00:12:37.012 response: 00:12:37.012 { 00:12:37.012 "code": -22, 00:12:37.012 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:37.012 } 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:37.012 19:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.393 19:41:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.393 "name": "raid_bdev1", 00:12:38.393 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:38.393 "strip_size_kb": 0, 00:12:38.393 "state": "online", 00:12:38.393 "raid_level": "raid1", 00:12:38.393 "superblock": true, 00:12:38.393 "num_base_bdevs": 2, 00:12:38.393 "num_base_bdevs_discovered": 1, 00:12:38.393 "num_base_bdevs_operational": 1, 00:12:38.393 "base_bdevs_list": [ 00:12:38.393 { 00:12:38.393 "name": null, 00:12:38.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.393 "is_configured": false, 00:12:38.393 "data_offset": 0, 00:12:38.393 "data_size": 63488 00:12:38.393 }, 00:12:38.393 { 00:12:38.393 "name": "BaseBdev2", 00:12:38.393 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:38.393 "is_configured": true, 00:12:38.393 "data_offset": 2048, 00:12:38.393 "data_size": 63488 00:12:38.393 } 00:12:38.393 ] 00:12:38.393 }' 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.393 19:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.393 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.393 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.393 19:41:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.393 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.393 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.393 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.393 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.393 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.393 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.653 "name": "raid_bdev1", 00:12:38.653 "uuid": "eb9f627b-923a-456d-b789-4c8580406517", 00:12:38.653 "strip_size_kb": 0, 00:12:38.653 "state": "online", 00:12:38.653 "raid_level": "raid1", 00:12:38.653 "superblock": true, 00:12:38.653 "num_base_bdevs": 2, 00:12:38.653 "num_base_bdevs_discovered": 1, 00:12:38.653 "num_base_bdevs_operational": 1, 00:12:38.653 "base_bdevs_list": [ 00:12:38.653 { 00:12:38.653 "name": null, 00:12:38.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.653 "is_configured": false, 00:12:38.653 "data_offset": 0, 00:12:38.653 "data_size": 63488 00:12:38.653 }, 00:12:38.653 { 00:12:38.653 "name": "BaseBdev2", 00:12:38.653 "uuid": "5864b7b5-32c4-5b22-8237-33bc37f9a70a", 00:12:38.653 "is_configured": true, 00:12:38.653 "data_offset": 2048, 00:12:38.653 "data_size": 63488 00:12:38.653 } 00:12:38.653 ] 00:12:38.653 }' 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77425 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77425 ']' 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77425 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77425 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.653 killing process with pid 77425 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77425' 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77425 00:12:38.653 Received shutdown signal, test time was about 60.000000 seconds 00:12:38.653 00:12:38.653 Latency(us) 00:12:38.653 [2024-12-12T19:41:21.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.653 [2024-12-12T19:41:21.498Z] =================================================================================================================== 00:12:38.653 [2024-12-12T19:41:21.498Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:38.653 [2024-12-12 19:41:21.410135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.653 [2024-12-12 
19:41:21.410280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.653 19:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77425 00:12:38.653 [2024-12-12 19:41:21.410334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.653 [2024-12-12 19:41:21.410345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:38.913 [2024-12-12 19:41:21.713623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:40.292 00:12:40.292 real 0m22.905s 00:12:40.292 user 0m27.866s 00:12:40.292 sys 0m3.588s 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.292 ************************************ 00:12:40.292 END TEST raid_rebuild_test_sb 00:12:40.292 ************************************ 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.292 19:41:22 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:40.292 19:41:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:40.292 19:41:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.292 19:41:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.292 ************************************ 00:12:40.292 START TEST raid_rebuild_test_io 00:12:40.292 ************************************ 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:40.292 
19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78155 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78155 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78155 ']' 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.292 19:41:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.292 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:40.292 Zero copy mechanism will not be used. 00:12:40.292 [2024-12-12 19:41:22.983786] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:40.292 [2024-12-12 19:41:22.983899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78155 ] 00:12:40.551 [2024-12-12 19:41:23.139278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.551 [2024-12-12 19:41:23.273587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.812 [2024-12-12 19:41:23.502619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.812 [2024-12-12 19:41:23.502660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.080 BaseBdev1_malloc 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.080 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.080 [2024-12-12 19:41:23.870842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:41.080 [2024-12-12 19:41:23.871011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.080 [2024-12-12 19:41:23.871065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:41.080 [2024-12-12 19:41:23.871123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.080 [2024-12-12 19:41:23.873728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.081 [2024-12-12 19:41:23.873819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.081 BaseBdev1 00:12:41.081 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.081 19:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:41.081 19:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:41.081 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.081 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.347 BaseBdev2_malloc 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.347 [2024-12-12 19:41:23.933755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:41.347 [2024-12-12 19:41:23.933900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.347 [2024-12-12 19:41:23.933945] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:41.347 [2024-12-12 19:41:23.934005] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.347 [2024-12-12 19:41:23.936474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.347 [2024-12-12 19:41:23.936595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:41.347 BaseBdev2 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.347 spare_malloc 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.347 19:41:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:41.347 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.347 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.347 spare_delay 00:12:41.347 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.347 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:41.347 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.347 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.347 [2024-12-12 19:41:24.018100] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:41.347 [2024-12-12 19:41:24.018176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.348 [2024-12-12 19:41:24.018199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:41.348 [2024-12-12 19:41:24.018212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.348 [2024-12-12 19:41:24.020655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.348 [2024-12-12 19:41:24.020704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:41.348 spare 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.348 [2024-12-12 19:41:24.030146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.348 [2024-12-12 19:41:24.032330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.348 [2024-12-12 19:41:24.032481] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:41.348 [2024-12-12 19:41:24.032537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:41.348 [2024-12-12 19:41:24.032871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:41.348 [2024-12-12 19:41:24.033113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:41.348 [2024-12-12 19:41:24.033164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:41.348 [2024-12-12 19:41:24.033421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.348 
"name": "raid_bdev1", 00:12:41.348 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:41.348 "strip_size_kb": 0, 00:12:41.348 "state": "online", 00:12:41.348 "raid_level": "raid1", 00:12:41.348 "superblock": false, 00:12:41.348 "num_base_bdevs": 2, 00:12:41.348 "num_base_bdevs_discovered": 2, 00:12:41.348 "num_base_bdevs_operational": 2, 00:12:41.348 "base_bdevs_list": [ 00:12:41.348 { 00:12:41.348 "name": "BaseBdev1", 00:12:41.348 "uuid": "d87a2380-4c97-5f1e-a9e3-7881eb9a2a4e", 00:12:41.348 "is_configured": true, 00:12:41.348 "data_offset": 0, 00:12:41.348 "data_size": 65536 00:12:41.348 }, 00:12:41.348 { 00:12:41.348 "name": "BaseBdev2", 00:12:41.348 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:41.348 "is_configured": true, 00:12:41.348 "data_offset": 0, 00:12:41.348 "data_size": 65536 00:12:41.348 } 00:12:41.348 ] 00:12:41.348 }' 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.348 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.916 [2024-12-12 19:41:24.469667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.916 [2024-12-12 19:41:24.545226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:41.916 19:41:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.916 "name": "raid_bdev1", 00:12:41.916 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:41.916 "strip_size_kb": 0, 00:12:41.916 "state": "online", 00:12:41.916 "raid_level": "raid1", 00:12:41.916 "superblock": false, 00:12:41.916 "num_base_bdevs": 2, 00:12:41.916 "num_base_bdevs_discovered": 1, 00:12:41.916 "num_base_bdevs_operational": 1, 00:12:41.916 "base_bdevs_list": [ 00:12:41.916 { 00:12:41.916 "name": null, 00:12:41.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.916 "is_configured": false, 00:12:41.916 "data_offset": 0, 00:12:41.916 "data_size": 65536 00:12:41.916 }, 00:12:41.916 { 00:12:41.916 "name": "BaseBdev2", 00:12:41.916 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:41.916 "is_configured": true, 00:12:41.916 "data_offset": 0, 00:12:41.916 "data_size": 65536 00:12:41.916 } 00:12:41.916 ] 00:12:41.916 }' 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:41.916 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.916 [2024-12-12 19:41:24.642824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:41.917 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:41.917 Zero copy mechanism will not be used. 00:12:41.917 Running I/O for 60 seconds... 00:12:42.176 19:41:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:42.176 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.176 19:41:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.176 [2024-12-12 19:41:24.970200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.176 19:41:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.176 19:41:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:42.437 [2024-12-12 19:41:25.039760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:42.437 [2024-12-12 19:41:25.042219] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.437 [2024-12-12 19:41:25.169749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:42.437 [2024-12-12 19:41:25.170663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:42.695 [2024-12-12 19:41:25.392229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:42.695 [2024-12-12 19:41:25.392651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:43.212 156.00 IOPS, 468.00 MiB/s 
[2024-12-12T19:41:26.057Z] [2024-12-12 19:41:25.853482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.212 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.472 "name": "raid_bdev1", 00:12:43.472 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:43.472 "strip_size_kb": 0, 00:12:43.472 "state": "online", 00:12:43.472 "raid_level": "raid1", 00:12:43.472 "superblock": false, 00:12:43.472 "num_base_bdevs": 2, 00:12:43.472 "num_base_bdevs_discovered": 2, 00:12:43.472 "num_base_bdevs_operational": 2, 00:12:43.472 "process": { 00:12:43.472 "type": "rebuild", 00:12:43.472 "target": "spare", 00:12:43.472 "progress": { 00:12:43.472 "blocks": 12288, 00:12:43.472 "percent": 18 00:12:43.472 } 00:12:43.472 }, 00:12:43.472 "base_bdevs_list": [ 00:12:43.472 
{ 00:12:43.472 "name": "spare", 00:12:43.472 "uuid": "cf9149aa-895d-5b02-9162-57591195e72e", 00:12:43.472 "is_configured": true, 00:12:43.472 "data_offset": 0, 00:12:43.472 "data_size": 65536 00:12:43.472 }, 00:12:43.472 { 00:12:43.472 "name": "BaseBdev2", 00:12:43.472 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:43.472 "is_configured": true, 00:12:43.472 "data_offset": 0, 00:12:43.472 "data_size": 65536 00:12:43.472 } 00:12:43.472 ] 00:12:43.472 }' 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.472 [2024-12-12 19:41:26.088486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.472 [2024-12-12 19:41:26.149662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.472 [2024-12-12 19:41:26.238745] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:43.472 [2024-12-12 19:41:26.242467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.472 [2024-12-12 19:41:26.242603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.472 [2024-12-12 19:41:26.242647] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: 
Failed to remove target bdev: No such device 00:12:43.472 [2024-12-12 19:41:26.288685] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.472 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.731 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.731 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.731 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.731 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.732 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.732 19:41:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.732 "name": "raid_bdev1", 00:12:43.732 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:43.732 "strip_size_kb": 0, 00:12:43.732 "state": "online", 00:12:43.732 "raid_level": "raid1", 00:12:43.732 "superblock": false, 00:12:43.732 "num_base_bdevs": 2, 00:12:43.732 "num_base_bdevs_discovered": 1, 00:12:43.732 "num_base_bdevs_operational": 1, 00:12:43.732 "base_bdevs_list": [ 00:12:43.732 { 00:12:43.732 "name": null, 00:12:43.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.732 "is_configured": false, 00:12:43.732 "data_offset": 0, 00:12:43.732 "data_size": 65536 00:12:43.732 }, 00:12:43.732 { 00:12:43.732 "name": "BaseBdev2", 00:12:43.732 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:43.732 "is_configured": true, 00:12:43.732 "data_offset": 0, 00:12:43.732 "data_size": 65536 00:12:43.732 } 00:12:43.732 ] 00:12:43.732 }' 00:12:43.732 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.732 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.991 167.50 IOPS, 502.50 MiB/s [2024-12-12T19:41:26.836Z] 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.991 19:41:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.991 "name": "raid_bdev1", 00:12:43.991 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:43.991 "strip_size_kb": 0, 00:12:43.991 "state": "online", 00:12:43.991 "raid_level": "raid1", 00:12:43.991 "superblock": false, 00:12:43.991 "num_base_bdevs": 2, 00:12:43.991 "num_base_bdevs_discovered": 1, 00:12:43.991 "num_base_bdevs_operational": 1, 00:12:43.991 "base_bdevs_list": [ 00:12:43.991 { 00:12:43.991 "name": null, 00:12:43.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.991 "is_configured": false, 00:12:43.991 "data_offset": 0, 00:12:43.991 "data_size": 65536 00:12:43.991 }, 00:12:43.991 { 00:12:43.991 "name": "BaseBdev2", 00:12:43.991 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:43.991 "is_configured": true, 00:12:43.991 "data_offset": 0, 00:12:43.991 "data_size": 65536 00:12:43.991 } 00:12:43.991 ] 00:12:43.991 }' 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.991 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.250 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.250 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.250 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.250 19:41:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.250 [2024-12-12 19:41:26.892766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.250 19:41:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.250 19:41:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:44.250 [2024-12-12 19:41:26.960134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:44.250 [2024-12-12 19:41:26.962600] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.509 [2024-12-12 19:41:27.109312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.509 [2024-12-12 19:41:27.343247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.509 [2024-12-12 19:41:27.344017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:45.077 161.67 IOPS, 485.00 MiB/s [2024-12-12T19:41:27.922Z] [2024-12-12 19:41:27.673699] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.337 19:41:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.337 "name": "raid_bdev1", 00:12:45.337 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:45.337 "strip_size_kb": 0, 00:12:45.337 "state": "online", 00:12:45.337 "raid_level": "raid1", 00:12:45.337 "superblock": false, 00:12:45.337 "num_base_bdevs": 2, 00:12:45.337 "num_base_bdevs_discovered": 2, 00:12:45.337 "num_base_bdevs_operational": 2, 00:12:45.337 "process": { 00:12:45.337 "type": "rebuild", 00:12:45.337 "target": "spare", 00:12:45.337 "progress": { 00:12:45.337 "blocks": 12288, 00:12:45.337 "percent": 18 00:12:45.337 } 00:12:45.337 }, 00:12:45.337 "base_bdevs_list": [ 00:12:45.337 { 00:12:45.337 "name": "spare", 00:12:45.337 "uuid": "cf9149aa-895d-5b02-9162-57591195e72e", 00:12:45.337 "is_configured": true, 00:12:45.337 "data_offset": 0, 00:12:45.337 "data_size": 65536 00:12:45.337 }, 00:12:45.337 { 00:12:45.337 "name": "BaseBdev2", 00:12:45.337 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:45.337 "is_configured": true, 00:12:45.337 "data_offset": 0, 00:12:45.337 "data_size": 65536 00:12:45.337 } 00:12:45.337 ] 00:12:45.337 }' 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.337 [2024-12-12 19:41:28.041483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=404 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.337 19:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.337 19:41:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.337 "name": "raid_bdev1", 00:12:45.337 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:45.337 "strip_size_kb": 0, 00:12:45.337 "state": "online", 00:12:45.337 "raid_level": "raid1", 00:12:45.337 "superblock": false, 00:12:45.337 "num_base_bdevs": 2, 00:12:45.338 "num_base_bdevs_discovered": 2, 00:12:45.338 "num_base_bdevs_operational": 2, 00:12:45.338 "process": { 00:12:45.338 "type": "rebuild", 00:12:45.338 "target": "spare", 00:12:45.338 "progress": { 00:12:45.338 "blocks": 14336, 00:12:45.338 "percent": 21 00:12:45.338 } 00:12:45.338 }, 00:12:45.338 "base_bdevs_list": [ 00:12:45.338 { 00:12:45.338 "name": "spare", 00:12:45.338 "uuid": "cf9149aa-895d-5b02-9162-57591195e72e", 00:12:45.338 "is_configured": true, 00:12:45.338 "data_offset": 0, 00:12:45.338 "data_size": 65536 00:12:45.338 }, 00:12:45.338 { 00:12:45.338 "name": "BaseBdev2", 00:12:45.338 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:45.338 "is_configured": true, 00:12:45.338 "data_offset": 0, 00:12:45.338 "data_size": 65536 00:12:45.338 } 00:12:45.338 ] 00:12:45.338 }' 00:12:45.338 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.338 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.338 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.598 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.598 19:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:45.598 [2024-12-12 19:41:28.267644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:45.598 [2024-12-12 19:41:28.268391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:12:46.116 132.50 IOPS, 397.50 MiB/s [2024-12-12T19:41:28.961Z] [2024-12-12 19:41:28.729923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:46.376 [2024-12-12 19:41:29.169925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.376 19:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.635 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.635 "name": "raid_bdev1", 00:12:46.635 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:46.635 "strip_size_kb": 0, 00:12:46.635 "state": "online", 00:12:46.635 "raid_level": "raid1", 00:12:46.635 "superblock": false, 00:12:46.635 
"num_base_bdevs": 2, 00:12:46.635 "num_base_bdevs_discovered": 2, 00:12:46.635 "num_base_bdevs_operational": 2, 00:12:46.635 "process": { 00:12:46.635 "type": "rebuild", 00:12:46.635 "target": "spare", 00:12:46.635 "progress": { 00:12:46.635 "blocks": 28672, 00:12:46.635 "percent": 43 00:12:46.635 } 00:12:46.635 }, 00:12:46.635 "base_bdevs_list": [ 00:12:46.635 { 00:12:46.635 "name": "spare", 00:12:46.635 "uuid": "cf9149aa-895d-5b02-9162-57591195e72e", 00:12:46.635 "is_configured": true, 00:12:46.635 "data_offset": 0, 00:12:46.635 "data_size": 65536 00:12:46.635 }, 00:12:46.635 { 00:12:46.635 "name": "BaseBdev2", 00:12:46.635 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:46.635 "is_configured": true, 00:12:46.635 "data_offset": 0, 00:12:46.635 "data_size": 65536 00:12:46.635 } 00:12:46.635 ] 00:12:46.635 }' 00:12:46.635 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.635 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.635 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.635 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.635 19:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:47.464 118.40 IOPS, 355.20 MiB/s [2024-12-12T19:41:30.309Z] [2024-12-12 19:41:30.061821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:47.464 [2024-12-12 19:41:30.062946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:47.464 [2024-12-12 19:41:30.274792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:47.464 [2024-12-12 19:41:30.275482] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.723 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.723 "name": "raid_bdev1", 00:12:47.723 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:47.724 "strip_size_kb": 0, 00:12:47.724 "state": "online", 00:12:47.724 "raid_level": "raid1", 00:12:47.724 "superblock": false, 00:12:47.724 "num_base_bdevs": 2, 00:12:47.724 "num_base_bdevs_discovered": 2, 00:12:47.724 "num_base_bdevs_operational": 2, 00:12:47.724 "process": { 00:12:47.724 "type": "rebuild", 00:12:47.724 "target": "spare", 00:12:47.724 "progress": { 00:12:47.724 "blocks": 47104, 00:12:47.724 "percent": 71 00:12:47.724 } 00:12:47.724 }, 
00:12:47.724 "base_bdevs_list": [ 00:12:47.724 { 00:12:47.724 "name": "spare", 00:12:47.724 "uuid": "cf9149aa-895d-5b02-9162-57591195e72e", 00:12:47.724 "is_configured": true, 00:12:47.724 "data_offset": 0, 00:12:47.724 "data_size": 65536 00:12:47.724 }, 00:12:47.724 { 00:12:47.724 "name": "BaseBdev2", 00:12:47.724 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:47.724 "is_configured": true, 00:12:47.724 "data_offset": 0, 00:12:47.724 "data_size": 65536 00:12:47.724 } 00:12:47.724 ] 00:12:47.724 }' 00:12:47.724 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.724 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.724 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.724 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.724 19:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.242 105.50 IOPS, 316.50 MiB/s [2024-12-12T19:41:31.087Z] [2024-12-12 19:41:30.943950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:48.812 [2024-12-12 19:41:31.388942] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.812 19:41:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.812 "name": "raid_bdev1", 00:12:48.812 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:48.812 "strip_size_kb": 0, 00:12:48.812 "state": "online", 00:12:48.812 "raid_level": "raid1", 00:12:48.812 "superblock": false, 00:12:48.812 "num_base_bdevs": 2, 00:12:48.812 "num_base_bdevs_discovered": 2, 00:12:48.812 "num_base_bdevs_operational": 2, 00:12:48.812 "process": { 00:12:48.812 "type": "rebuild", 00:12:48.812 "target": "spare", 00:12:48.812 "progress": { 00:12:48.812 "blocks": 65536, 00:12:48.812 "percent": 100 00:12:48.812 } 00:12:48.812 }, 00:12:48.812 "base_bdevs_list": [ 00:12:48.812 { 00:12:48.812 "name": "spare", 00:12:48.812 "uuid": "cf9149aa-895d-5b02-9162-57591195e72e", 00:12:48.812 "is_configured": true, 00:12:48.812 "data_offset": 0, 00:12:48.812 "data_size": 65536 00:12:48.812 }, 00:12:48.812 { 00:12:48.812 "name": "BaseBdev2", 00:12:48.812 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:48.812 "is_configured": true, 00:12:48.812 "data_offset": 0, 00:12:48.812 "data_size": 65536 00:12:48.812 } 00:12:48.812 ] 00:12:48.812 }' 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.812 [2024-12-12 19:41:31.494663] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:48.812 [2024-12-12 19:41:31.498579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.812 19:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.752 97.14 IOPS, 291.43 MiB/s [2024-12-12T19:41:32.597Z] 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.752 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:50.012 "name": "raid_bdev1", 00:12:50.012 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:50.012 "strip_size_kb": 0, 00:12:50.012 "state": "online", 00:12:50.012 "raid_level": "raid1", 00:12:50.012 "superblock": false, 00:12:50.012 "num_base_bdevs": 2, 00:12:50.012 "num_base_bdevs_discovered": 2, 00:12:50.012 "num_base_bdevs_operational": 2, 00:12:50.012 "base_bdevs_list": [ 00:12:50.012 { 00:12:50.012 "name": "spare", 00:12:50.012 "uuid": "cf9149aa-895d-5b02-9162-57591195e72e", 00:12:50.012 "is_configured": true, 00:12:50.012 "data_offset": 0, 00:12:50.012 "data_size": 65536 00:12:50.012 }, 00:12:50.012 { 00:12:50.012 "name": "BaseBdev2", 00:12:50.012 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:50.012 "is_configured": true, 00:12:50.012 "data_offset": 0, 00:12:50.012 "data_size": 65536 00:12:50.012 } 00:12:50.012 ] 00:12:50.012 }' 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.012 89.38 IOPS, 268.12 MiB/s [2024-12-12T19:41:32.857Z] 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.012 "name": "raid_bdev1", 00:12:50.012 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:50.012 "strip_size_kb": 0, 00:12:50.012 "state": "online", 00:12:50.012 "raid_level": "raid1", 00:12:50.012 "superblock": false, 00:12:50.012 "num_base_bdevs": 2, 00:12:50.012 "num_base_bdevs_discovered": 2, 00:12:50.012 "num_base_bdevs_operational": 2, 00:12:50.012 "base_bdevs_list": [ 00:12:50.012 { 00:12:50.012 "name": "spare", 00:12:50.012 "uuid": "cf9149aa-895d-5b02-9162-57591195e72e", 00:12:50.012 "is_configured": true, 00:12:50.012 "data_offset": 0, 00:12:50.012 "data_size": 65536 00:12:50.012 }, 00:12:50.012 { 00:12:50.012 "name": "BaseBdev2", 00:12:50.012 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:50.012 "is_configured": true, 00:12:50.012 "data_offset": 0, 00:12:50.012 "data_size": 65536 00:12:50.012 } 00:12:50.012 ] 00:12:50.012 }' 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.012 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.013 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.013 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.013 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.013 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.272 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.272 "name": "raid_bdev1", 00:12:50.272 "uuid": "fff98135-5bd9-43de-b629-e127da9e5620", 00:12:50.273 "strip_size_kb": 0, 00:12:50.273 "state": "online", 00:12:50.273 "raid_level": "raid1", 00:12:50.273 "superblock": false, 
00:12:50.273 "num_base_bdevs": 2, 00:12:50.273 "num_base_bdevs_discovered": 2, 00:12:50.273 "num_base_bdevs_operational": 2, 00:12:50.273 "base_bdevs_list": [ 00:12:50.273 { 00:12:50.273 "name": "spare", 00:12:50.273 "uuid": "cf9149aa-895d-5b02-9162-57591195e72e", 00:12:50.273 "is_configured": true, 00:12:50.273 "data_offset": 0, 00:12:50.273 "data_size": 65536 00:12:50.273 }, 00:12:50.273 { 00:12:50.273 "name": "BaseBdev2", 00:12:50.273 "uuid": "7477c5df-a9e3-5217-baef-49e63e244adc", 00:12:50.273 "is_configured": true, 00:12:50.273 "data_offset": 0, 00:12:50.273 "data_size": 65536 00:12:50.273 } 00:12:50.273 ] 00:12:50.273 }' 00:12:50.273 19:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.273 19:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.533 [2024-12-12 19:41:33.226094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.533 [2024-12-12 19:41:33.226185] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.533 00:12:50.533 Latency(us) 00:12:50.533 [2024-12-12T19:41:33.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.533 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:50.533 raid_bdev1 : 8.64 85.21 255.64 0.00 0.00 16745.53 314.80 114015.47 00:12:50.533 [2024-12-12T19:41:33.378Z] =================================================================================================================== 00:12:50.533 [2024-12-12T19:41:33.378Z] Total : 85.21 255.64 0.00 0.00 16745.53 314.80 114015.47 
00:12:50.533 [2024-12-12 19:41:33.289430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.533 { 00:12:50.533 "results": [ 00:12:50.533 { 00:12:50.533 "job": "raid_bdev1", 00:12:50.533 "core_mask": "0x1", 00:12:50.533 "workload": "randrw", 00:12:50.533 "percentage": 50, 00:12:50.533 "status": "finished", 00:12:50.533 "queue_depth": 2, 00:12:50.533 "io_size": 3145728, 00:12:50.533 "runtime": 8.637166, 00:12:50.533 "iops": 85.21313588276524, 00:12:50.533 "mibps": 255.63940764829573, 00:12:50.533 "io_failed": 0, 00:12:50.533 "io_timeout": 0, 00:12:50.533 "avg_latency_us": 16745.5289158914, 00:12:50.533 "min_latency_us": 314.80174672489085, 00:12:50.533 "max_latency_us": 114015.46899563319 00:12:50.533 } 00:12:50.533 ], 00:12:50.533 "core_count": 1 00:12:50.533 } 00:12:50.533 [2024-12-12 19:41:33.289663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.533 [2024-12-12 19:41:33.289808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.533 [2024-12-12 19:41:33.289821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.533 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:50.795 /dev/nbd0 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- 
# grep -q -w nbd0 /proc/partitions 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.795 1+0 records in 00:12:50.795 1+0 records out 00:12:50.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058343 s, 7.0 MB/s 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.795 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:51.056 /dev/nbd1 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.056 1+0 records in 00:12:51.056 1+0 records out 00:12:51.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464843 s, 8.8 MB/s 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.056 19:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:51.316 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:51.316 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.316 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:51.316 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.316 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:51.316 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.316 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.576 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.836 19:41:34 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78155 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78155 ']' 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78155 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78155 00:12:51.836 killing process with pid 78155 00:12:51.836 Received shutdown signal, test time was about 9.881266 seconds 00:12:51.836 00:12:51.836 Latency(us) 00:12:51.836 [2024-12-12T19:41:34.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.836 [2024-12-12T19:41:34.681Z] =================================================================================================================== 00:12:51.836 [2024-12-12T19:41:34.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78155' 
00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78155 00:12:51.836 [2024-12-12 19:41:34.507662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.836 19:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78155 00:12:52.096 [2024-12-12 19:41:34.758903] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:53.475 00:12:53.475 real 0m13.143s 00:12:53.475 user 0m16.006s 00:12:53.475 sys 0m1.605s 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.475 ************************************ 00:12:53.475 END TEST raid_rebuild_test_io 00:12:53.475 ************************************ 00:12:53.475 19:41:36 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:53.475 19:41:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:53.475 19:41:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.475 19:41:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.475 ************************************ 00:12:53.475 START TEST raid_rebuild_test_sb_io 00:12:53.475 ************************************ 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:53.475 19:41:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78544 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78544 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78544 ']' 00:12:53.475 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.476 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.476 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:53.476 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.476 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.476 19:41:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.476 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:53.476 Zero copy mechanism will not be used. 00:12:53.476 [2024-12-12 19:41:36.206737] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:53.476 [2024-12-12 19:41:36.206867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78544 ] 00:12:53.735 [2024-12-12 19:41:36.381002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.735 [2024-12-12 19:41:36.518785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.995 [2024-12-12 19:41:36.752512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.995 [2024-12-12 19:41:36.752607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.255 BaseBdev1_malloc 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.255 [2024-12-12 19:41:37.061759] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:54.255 [2024-12-12 19:41:37.061904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.255 [2024-12-12 19:41:37.061937] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:54.255 [2024-12-12 19:41:37.061952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.255 [2024-12-12 19:41:37.064463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.255 [2024-12-12 19:41:37.064510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:54.255 BaseBdev1 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.255 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.256 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:54.256 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.256 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.516 BaseBdev2_malloc 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.516 [2024-12-12 19:41:37.120437] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:54.516 [2024-12-12 19:41:37.120598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:54.516 [2024-12-12 19:41:37.120644] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:54.516 [2024-12-12 19:41:37.120715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.516 [2024-12-12 19:41:37.123245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.516 [2024-12-12 19:41:37.123336] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:54.516 BaseBdev2 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.516 spare_malloc 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.516 spare_delay 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.516 
[2024-12-12 19:41:37.198636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:54.516 [2024-12-12 19:41:37.198760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.516 [2024-12-12 19:41:37.198822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:54.516 [2024-12-12 19:41:37.198869] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.516 [2024-12-12 19:41:37.201448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.516 [2024-12-12 19:41:37.201498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:54.516 spare 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.516 [2024-12-12 19:41:37.206708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.516 [2024-12-12 19:41:37.208924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.516 [2024-12-12 19:41:37.209221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:54.516 [2024-12-12 19:41:37.209287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.516 [2024-12-12 19:41:37.209653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:54.516 [2024-12-12 19:41:37.209924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:54.516 [2024-12-12 
19:41:37.209975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:54.516 [2024-12-12 19:41:37.210237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.516 "name": "raid_bdev1", 00:12:54.516 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:12:54.516 "strip_size_kb": 0, 00:12:54.516 "state": "online", 00:12:54.516 "raid_level": "raid1", 00:12:54.516 "superblock": true, 00:12:54.516 "num_base_bdevs": 2, 00:12:54.516 "num_base_bdevs_discovered": 2, 00:12:54.516 "num_base_bdevs_operational": 2, 00:12:54.516 "base_bdevs_list": [ 00:12:54.516 { 00:12:54.516 "name": "BaseBdev1", 00:12:54.516 "uuid": "98441427-aebd-55a0-8be4-e229dafb3af4", 00:12:54.516 "is_configured": true, 00:12:54.516 "data_offset": 2048, 00:12:54.516 "data_size": 63488 00:12:54.516 }, 00:12:54.516 { 00:12:54.516 "name": "BaseBdev2", 00:12:54.516 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:12:54.516 "is_configured": true, 00:12:54.516 "data_offset": 2048, 00:12:54.516 "data_size": 63488 00:12:54.516 } 00:12:54.516 ] 00:12:54.516 }' 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.516 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.776 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:54.776 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.776 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.776 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:54.776 [2024-12-12 19:41:37.614384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.036 [2024-12-12 19:41:37.713827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.036 "name": "raid_bdev1", 00:12:55.036 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:12:55.036 "strip_size_kb": 0, 00:12:55.036 "state": "online", 00:12:55.036 "raid_level": "raid1", 00:12:55.036 "superblock": true, 00:12:55.036 "num_base_bdevs": 2, 00:12:55.036 "num_base_bdevs_discovered": 1, 00:12:55.036 "num_base_bdevs_operational": 1, 00:12:55.036 "base_bdevs_list": [ 00:12:55.036 { 00:12:55.036 "name": null, 00:12:55.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.036 "is_configured": false, 00:12:55.036 "data_offset": 0, 00:12:55.036 "data_size": 63488 00:12:55.036 }, 00:12:55.036 { 00:12:55.036 "name": "BaseBdev2", 00:12:55.036 "uuid": 
"bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:12:55.036 "is_configured": true, 00:12:55.036 "data_offset": 2048, 00:12:55.036 "data_size": 63488 00:12:55.036 } 00:12:55.036 ] 00:12:55.036 }' 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.036 19:41:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.036 [2024-12-12 19:41:37.802775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:55.036 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:55.036 Zero copy mechanism will not be used. 00:12:55.036 Running I/O for 60 seconds... 00:12:55.295 19:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:55.295 19:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.295 19:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.295 [2024-12-12 19:41:38.135985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.554 19:41:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.554 19:41:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:55.555 [2024-12-12 19:41:38.197074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:55.555 [2024-12-12 19:41:38.199462] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:55.555 [2024-12-12 19:41:38.315008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:55.555 [2024-12-12 19:41:38.316060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:55.814 [2024-12-12 19:41:38.525579] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:55.814 [2024-12-12 19:41:38.526304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:56.073 [2024-12-12 19:41:38.760604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:56.073 [2024-12-12 19:41:38.761452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:56.332 184.00 IOPS, 552.00 MiB/s [2024-12-12T19:41:39.177Z] [2024-12-12 19:41:38.977749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:56.332 [2024-12-12 19:41:38.978291] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.591 [2024-12-12 19:41:39.232533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.591 "name": "raid_bdev1", 00:12:56.591 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:12:56.591 "strip_size_kb": 0, 00:12:56.591 "state": "online", 00:12:56.591 "raid_level": "raid1", 00:12:56.591 "superblock": true, 00:12:56.591 "num_base_bdevs": 2, 00:12:56.591 "num_base_bdevs_discovered": 2, 00:12:56.591 "num_base_bdevs_operational": 2, 00:12:56.591 "process": { 00:12:56.591 "type": "rebuild", 00:12:56.591 "target": "spare", 00:12:56.591 "progress": { 00:12:56.591 "blocks": 12288, 00:12:56.591 "percent": 19 00:12:56.591 } 00:12:56.591 }, 00:12:56.591 "base_bdevs_list": [ 00:12:56.591 { 00:12:56.591 "name": "spare", 00:12:56.591 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:12:56.591 "is_configured": true, 00:12:56.591 "data_offset": 2048, 00:12:56.591 "data_size": 63488 00:12:56.591 }, 00:12:56.591 { 00:12:56.591 "name": "BaseBdev2", 00:12:56.591 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:12:56.591 "is_configured": true, 00:12:56.591 "data_offset": 2048, 00:12:56.591 "data_size": 63488 00:12:56.591 } 00:12:56.591 ] 00:12:56.591 }' 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 
-- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.591 [2024-12-12 19:41:39.348064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.591 [2024-12-12 19:41:39.348973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:56.591 [2024-12-12 19:41:39.356750] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.591 [2024-12-12 19:41:39.366005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.591 [2024-12-12 19:41:39.366050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.591 [2024-12-12 19:41:39.366079] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.591 [2024-12-12 19:41:39.405643] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.591 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.850 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.850 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.850 "name": "raid_bdev1", 00:12:56.850 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:12:56.850 "strip_size_kb": 0, 00:12:56.850 "state": "online", 00:12:56.850 "raid_level": "raid1", 00:12:56.850 "superblock": true, 00:12:56.850 "num_base_bdevs": 2, 00:12:56.850 "num_base_bdevs_discovered": 1, 00:12:56.850 "num_base_bdevs_operational": 1, 00:12:56.850 "base_bdevs_list": [ 00:12:56.850 { 00:12:56.850 "name": null, 00:12:56.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.850 "is_configured": false, 00:12:56.851 "data_offset": 0, 00:12:56.851 "data_size": 63488 00:12:56.851 }, 00:12:56.851 { 00:12:56.851 "name": "BaseBdev2", 00:12:56.851 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:12:56.851 "is_configured": true, 00:12:56.851 "data_offset": 2048, 00:12:56.851 "data_size": 63488 00:12:56.851 } 00:12:56.851 ] 00:12:56.851 }' 00:12:56.851 19:41:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.851 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.110 207.50 IOPS, 622.50 MiB/s [2024-12-12T19:41:39.955Z] 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.110 "name": "raid_bdev1", 00:12:57.110 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:12:57.110 "strip_size_kb": 0, 00:12:57.110 "state": "online", 00:12:57.110 "raid_level": "raid1", 00:12:57.110 "superblock": true, 00:12:57.110 "num_base_bdevs": 2, 00:12:57.110 "num_base_bdevs_discovered": 1, 00:12:57.110 "num_base_bdevs_operational": 1, 00:12:57.110 "base_bdevs_list": [ 00:12:57.110 { 00:12:57.110 "name": null, 00:12:57.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.110 "is_configured": false, 
00:12:57.110 "data_offset": 0, 00:12:57.110 "data_size": 63488 00:12:57.110 }, 00:12:57.110 { 00:12:57.110 "name": "BaseBdev2", 00:12:57.110 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:12:57.110 "is_configured": true, 00:12:57.110 "data_offset": 2048, 00:12:57.110 "data_size": 63488 00:12:57.110 } 00:12:57.110 ] 00:12:57.110 }' 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.110 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.369 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.369 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:57.369 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.369 19:41:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.369 [2024-12-12 19:41:39.979486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.369 19:41:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.369 19:41:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:57.369 [2024-12-12 19:41:40.043298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:57.369 [2024-12-12 19:41:40.045521] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.369 [2024-12-12 19:41:40.163854] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.369 [2024-12-12 19:41:40.164693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.627 [2024-12-12 19:41:40.369426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.627 [2024-12-12 19:41:40.369987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.886 [2024-12-12 19:41:40.714557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:58.145 188.33 IOPS, 565.00 MiB/s [2024-12-12T19:41:40.990Z] [2024-12-12 19:41:40.832884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:58.145 [2024-12-12 19:41:40.833448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.404 19:41:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.404 "name": "raid_bdev1", 00:12:58.404 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:12:58.404 "strip_size_kb": 0, 00:12:58.404 "state": "online", 00:12:58.404 "raid_level": "raid1", 00:12:58.404 "superblock": true, 00:12:58.404 "num_base_bdevs": 2, 00:12:58.404 "num_base_bdevs_discovered": 2, 00:12:58.404 "num_base_bdevs_operational": 2, 00:12:58.404 "process": { 00:12:58.404 "type": "rebuild", 00:12:58.404 "target": "spare", 00:12:58.404 "progress": { 00:12:58.404 "blocks": 12288, 00:12:58.404 "percent": 19 00:12:58.404 } 00:12:58.404 }, 00:12:58.404 "base_bdevs_list": [ 00:12:58.404 { 00:12:58.404 "name": "spare", 00:12:58.404 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:12:58.404 "is_configured": true, 00:12:58.404 "data_offset": 2048, 00:12:58.404 "data_size": 63488 00:12:58.404 }, 00:12:58.404 { 00:12:58.404 "name": "BaseBdev2", 00:12:58.404 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:12:58.404 "is_configured": true, 00:12:58.404 "data_offset": 2048, 00:12:58.404 "data_size": 63488 00:12:58.404 } 00:12:58.404 ] 00:12:58.404 }' 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.404 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:58.405 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=417 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.405 [2024-12-12 19:41:41.193989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:58.405 [2024-12-12 19:41:41.194657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 
18432 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.405 "name": "raid_bdev1", 00:12:58.405 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:12:58.405 "strip_size_kb": 0, 00:12:58.405 "state": "online", 00:12:58.405 "raid_level": "raid1", 00:12:58.405 "superblock": true, 00:12:58.405 "num_base_bdevs": 2, 00:12:58.405 "num_base_bdevs_discovered": 2, 00:12:58.405 "num_base_bdevs_operational": 2, 00:12:58.405 "process": { 00:12:58.405 "type": "rebuild", 00:12:58.405 "target": "spare", 00:12:58.405 "progress": { 00:12:58.405 "blocks": 14336, 00:12:58.405 "percent": 22 00:12:58.405 } 00:12:58.405 }, 00:12:58.405 "base_bdevs_list": [ 00:12:58.405 { 00:12:58.405 "name": "spare", 00:12:58.405 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:12:58.405 "is_configured": true, 00:12:58.405 "data_offset": 2048, 00:12:58.405 "data_size": 63488 00:12:58.405 }, 00:12:58.405 { 00:12:58.405 "name": "BaseBdev2", 00:12:58.405 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:12:58.405 "is_configured": true, 00:12:58.405 "data_offset": 2048, 00:12:58.405 "data_size": 63488 00:12:58.405 } 00:12:58.405 ] 00:12:58.405 }' 00:12:58.405 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.663 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.663 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.663 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.663 19:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:58.922 [2024-12-12 19:41:41.675935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 
18432 offset_end: 24576 00:12:59.441 157.75 IOPS, 473.25 MiB/s [2024-12-12T19:41:42.286Z] [2024-12-12 19:41:42.026367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:59.441 [2024-12-12 19:41:42.140454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:59.441 [2024-12-12 19:41:42.140931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.701 "name": "raid_bdev1", 00:12:59.701 "uuid": 
"1c392354-108e-43d4-9952-b9615c3d57b8", 00:12:59.701 "strip_size_kb": 0, 00:12:59.701 "state": "online", 00:12:59.701 "raid_level": "raid1", 00:12:59.701 "superblock": true, 00:12:59.701 "num_base_bdevs": 2, 00:12:59.701 "num_base_bdevs_discovered": 2, 00:12:59.701 "num_base_bdevs_operational": 2, 00:12:59.701 "process": { 00:12:59.701 "type": "rebuild", 00:12:59.701 "target": "spare", 00:12:59.701 "progress": { 00:12:59.701 "blocks": 28672, 00:12:59.701 "percent": 45 00:12:59.701 } 00:12:59.701 }, 00:12:59.701 "base_bdevs_list": [ 00:12:59.701 { 00:12:59.701 "name": "spare", 00:12:59.701 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:12:59.701 "is_configured": true, 00:12:59.701 "data_offset": 2048, 00:12:59.701 "data_size": 63488 00:12:59.701 }, 00:12:59.701 { 00:12:59.701 "name": "BaseBdev2", 00:12:59.701 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:12:59.701 "is_configured": true, 00:12:59.701 "data_offset": 2048, 00:12:59.701 "data_size": 63488 00:12:59.701 } 00:12:59.701 ] 00:12:59.701 }' 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.701 19:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.701 [2024-12-12 19:41:42.478873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:59.961 [2024-12-12 19:41:42.596968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:59.961 [2024-12-12 19:41:42.597650] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:00.227 136.20 IOPS, 408.60 MiB/s [2024-12-12T19:41:43.072Z] [2024-12-12 19:41:42.831800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:00.227 [2024-12-12 19:41:42.832687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:00.227 [2024-12-12 19:41:43.059880] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:00.804 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:00.804 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.804 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.804 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.804 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.804 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.804 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.804 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.805 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.805 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.805 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.805 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.805 "name": 
"raid_bdev1", 00:13:00.805 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:00.805 "strip_size_kb": 0, 00:13:00.805 "state": "online", 00:13:00.805 "raid_level": "raid1", 00:13:00.805 "superblock": true, 00:13:00.805 "num_base_bdevs": 2, 00:13:00.805 "num_base_bdevs_discovered": 2, 00:13:00.805 "num_base_bdevs_operational": 2, 00:13:00.805 "process": { 00:13:00.805 "type": "rebuild", 00:13:00.805 "target": "spare", 00:13:00.805 "progress": { 00:13:00.805 "blocks": 45056, 00:13:00.805 "percent": 70 00:13:00.805 } 00:13:00.805 }, 00:13:00.805 "base_bdevs_list": [ 00:13:00.805 { 00:13:00.805 "name": "spare", 00:13:00.805 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:13:00.805 "is_configured": true, 00:13:00.805 "data_offset": 2048, 00:13:00.805 "data_size": 63488 00:13:00.805 }, 00:13:00.805 { 00:13:00.805 "name": "BaseBdev2", 00:13:00.805 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:00.805 "is_configured": true, 00:13:00.805 "data_offset": 2048, 00:13:00.805 "data_size": 63488 00:13:00.805 } 00:13:00.805 ] 00:13:00.805 }' 00:13:00.805 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.805 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.805 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.805 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.805 19:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:01.064 [2024-12-12 19:41:43.725271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:01.632 119.83 IOPS, 359.50 MiB/s [2024-12-12T19:41:44.477Z] [2024-12-12 19:41:44.385140] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:01.892 [2024-12-12 
19:41:44.490022] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:01.892 [2024-12-12 19:41:44.494278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.892 "name": "raid_bdev1", 00:13:01.892 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:01.892 "strip_size_kb": 0, 00:13:01.892 "state": "online", 00:13:01.892 "raid_level": "raid1", 00:13:01.892 "superblock": true, 00:13:01.892 "num_base_bdevs": 2, 00:13:01.892 "num_base_bdevs_discovered": 2, 00:13:01.892 "num_base_bdevs_operational": 2, 00:13:01.892 "base_bdevs_list": [ 00:13:01.892 { 
00:13:01.892 "name": "spare", 00:13:01.892 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:13:01.892 "is_configured": true, 00:13:01.892 "data_offset": 2048, 00:13:01.892 "data_size": 63488 00:13:01.892 }, 00:13:01.892 { 00:13:01.892 "name": "BaseBdev2", 00:13:01.892 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:01.892 "is_configured": true, 00:13:01.892 "data_offset": 2048, 00:13:01.892 "data_size": 63488 00:13:01.892 } 00:13:01.892 ] 00:13:01.892 }' 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.892 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.151 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.151 "name": "raid_bdev1", 00:13:02.152 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:02.152 "strip_size_kb": 0, 00:13:02.152 "state": "online", 00:13:02.152 "raid_level": "raid1", 00:13:02.152 "superblock": true, 00:13:02.152 "num_base_bdevs": 2, 00:13:02.152 "num_base_bdevs_discovered": 2, 00:13:02.152 "num_base_bdevs_operational": 2, 00:13:02.152 "base_bdevs_list": [ 00:13:02.152 { 00:13:02.152 "name": "spare", 00:13:02.152 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:13:02.152 "is_configured": true, 00:13:02.152 "data_offset": 2048, 00:13:02.152 "data_size": 63488 00:13:02.152 }, 00:13:02.152 { 00:13:02.152 "name": "BaseBdev2", 00:13:02.152 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:02.152 "is_configured": true, 00:13:02.152 "data_offset": 2048, 00:13:02.152 "data_size": 63488 00:13:02.152 } 00:13:02.152 ] 00:13:02.152 }' 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.152 106.86 IOPS, 320.57 MiB/s [2024-12-12T19:41:44.997Z] 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.152 "name": "raid_bdev1", 00:13:02.152 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:02.152 "strip_size_kb": 0, 00:13:02.152 "state": "online", 00:13:02.152 "raid_level": "raid1", 00:13:02.152 "superblock": true, 00:13:02.152 "num_base_bdevs": 2, 00:13:02.152 "num_base_bdevs_discovered": 2, 00:13:02.152 "num_base_bdevs_operational": 2, 00:13:02.152 "base_bdevs_list": [ 00:13:02.152 { 00:13:02.152 "name": "spare", 00:13:02.152 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:13:02.152 "is_configured": true, 
00:13:02.152 "data_offset": 2048, 00:13:02.152 "data_size": 63488 00:13:02.152 }, 00:13:02.152 { 00:13:02.152 "name": "BaseBdev2", 00:13:02.152 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:02.152 "is_configured": true, 00:13:02.152 "data_offset": 2048, 00:13:02.152 "data_size": 63488 00:13:02.152 } 00:13:02.152 ] 00:13:02.152 }' 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.152 19:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.721 [2024-12-12 19:41:45.303298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.721 [2024-12-12 19:41:45.303419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.721 00:13:02.721 Latency(us) 00:13:02.721 [2024-12-12T19:41:45.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.721 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:02.721 raid_bdev1 : 7.57 101.16 303.47 0.00 0.00 13197.64 321.96 114473.36 00:13:02.721 [2024-12-12T19:41:45.566Z] =================================================================================================================== 00:13:02.721 [2024-12-12T19:41:45.566Z] Total : 101.16 303.47 0.00 0.00 13197.64 321.96 114473.36 00:13:02.721 [2024-12-12 19:41:45.384951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.721 [2024-12-12 19:41:45.385038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.721 [2024-12-12 19:41:45.385129] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.721 [2024-12-12 19:41:45.385141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:02.721 { 00:13:02.721 "results": [ 00:13:02.721 { 00:13:02.721 "job": "raid_bdev1", 00:13:02.721 "core_mask": "0x1", 00:13:02.721 "workload": "randrw", 00:13:02.721 "percentage": 50, 00:13:02.721 "status": "finished", 00:13:02.721 "queue_depth": 2, 00:13:02.721 "io_size": 3145728, 00:13:02.721 "runtime": 7.572434, 00:13:02.721 "iops": 101.15637851713201, 00:13:02.721 "mibps": 303.46913555139605, 00:13:02.721 "io_failed": 0, 00:13:02.721 "io_timeout": 0, 00:13:02.721 "avg_latency_us": 13197.635217257459, 00:13:02.721 "min_latency_us": 321.95633187772927, 00:13:02.721 "max_latency_us": 114473.36244541485 00:13:02.721 } 00:13:02.721 ], 00:13:02.721 "core_count": 1 00:13:02.721 } 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks 
/var/tmp/spdk.sock spare /dev/nbd0 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.721 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:02.980 /dev/nbd0 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.980 1+0 records in 00:13:02.980 1+0 records out 00:13:02.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380604 s, 10.8 MB/s 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:02.980 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.981 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:03.240 /dev/nbd1 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.240 1+0 
records in 00:13:03.240 1+0 records out 00:13:03.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285914 s, 14.3 MB/s 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.240 19:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:03.499 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:03.499 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.500 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:03.500 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.500 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:03.500 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.500 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.759 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.019 19:41:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.019 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.019 [2024-12-12 19:41:46.648123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:04.019 [2024-12-12 19:41:46.648196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.019 [2024-12-12 19:41:46.648225] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:04.019 [2024-12-12 19:41:46.648237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.019 [2024-12-12 19:41:46.650818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.020 [2024-12-12 19:41:46.650923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
spare 00:13:04.020 [2024-12-12 19:41:46.651049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:04.020 [2024-12-12 19:41:46.651121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.020 [2024-12-12 19:41:46.651299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.020 spare 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.020 [2024-12-12 19:41:46.751220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:04.020 [2024-12-12 19:41:46.751335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:04.020 [2024-12-12 19:41:46.751750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:04.020 [2024-12-12 19:41:46.752027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:04.020 [2024-12-12 19:41:46.752080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:04.020 [2024-12-12 19:41:46.752376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.020 19:41:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.020 "name": "raid_bdev1", 00:13:04.020 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:04.020 "strip_size_kb": 0, 00:13:04.020 "state": "online", 00:13:04.020 "raid_level": "raid1", 00:13:04.020 "superblock": true, 00:13:04.020 "num_base_bdevs": 2, 00:13:04.020 "num_base_bdevs_discovered": 2, 00:13:04.020 "num_base_bdevs_operational": 2, 00:13:04.020 "base_bdevs_list": [ 00:13:04.020 { 00:13:04.020 "name": "spare", 00:13:04.020 "uuid": 
"ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:13:04.020 "is_configured": true, 00:13:04.020 "data_offset": 2048, 00:13:04.020 "data_size": 63488 00:13:04.020 }, 00:13:04.020 { 00:13:04.020 "name": "BaseBdev2", 00:13:04.020 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:04.020 "is_configured": true, 00:13:04.020 "data_offset": 2048, 00:13:04.020 "data_size": 63488 00:13:04.020 } 00:13:04.020 ] 00:13:04.020 }' 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.020 19:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.589 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.589 "name": "raid_bdev1", 00:13:04.589 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:04.589 "strip_size_kb": 0, 00:13:04.589 
"state": "online", 00:13:04.589 "raid_level": "raid1", 00:13:04.589 "superblock": true, 00:13:04.589 "num_base_bdevs": 2, 00:13:04.589 "num_base_bdevs_discovered": 2, 00:13:04.589 "num_base_bdevs_operational": 2, 00:13:04.589 "base_bdevs_list": [ 00:13:04.589 { 00:13:04.589 "name": "spare", 00:13:04.589 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:13:04.589 "is_configured": true, 00:13:04.589 "data_offset": 2048, 00:13:04.589 "data_size": 63488 00:13:04.589 }, 00:13:04.590 { 00:13:04.590 "name": "BaseBdev2", 00:13:04.590 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:04.590 "is_configured": true, 00:13:04.590 "data_offset": 2048, 00:13:04.590 "data_size": 63488 00:13:04.590 } 00:13:04.590 ] 00:13:04.590 }' 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:04.590 
19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.590 [2024-12-12 19:41:47.347477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.590 "name": "raid_bdev1", 00:13:04.590 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:04.590 "strip_size_kb": 0, 00:13:04.590 "state": "online", 00:13:04.590 "raid_level": "raid1", 00:13:04.590 "superblock": true, 00:13:04.590 "num_base_bdevs": 2, 00:13:04.590 "num_base_bdevs_discovered": 1, 00:13:04.590 "num_base_bdevs_operational": 1, 00:13:04.590 "base_bdevs_list": [ 00:13:04.590 { 00:13:04.590 "name": null, 00:13:04.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.590 "is_configured": false, 00:13:04.590 "data_offset": 0, 00:13:04.590 "data_size": 63488 00:13:04.590 }, 00:13:04.590 { 00:13:04.590 "name": "BaseBdev2", 00:13:04.590 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:04.590 "is_configured": true, 00:13:04.590 "data_offset": 2048, 00:13:04.590 "data_size": 63488 00:13:04.590 } 00:13:04.590 ] 00:13:04.590 }' 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.590 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.160 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.160 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.160 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.160 [2024-12-12 19:41:47.786827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.160 [2024-12-12 19:41:47.787117] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:05.160 [2024-12-12 19:41:47.787177] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:05.160 [2024-12-12 19:41:47.787233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.160 [2024-12-12 19:41:47.806238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:05.160 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.160 19:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:05.160 [2024-12-12 19:41:47.808369] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.100 "name": "raid_bdev1", 00:13:06.100 "uuid": 
"1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:06.100 "strip_size_kb": 0, 00:13:06.100 "state": "online", 00:13:06.100 "raid_level": "raid1", 00:13:06.100 "superblock": true, 00:13:06.100 "num_base_bdevs": 2, 00:13:06.100 "num_base_bdevs_discovered": 2, 00:13:06.100 "num_base_bdevs_operational": 2, 00:13:06.100 "process": { 00:13:06.100 "type": "rebuild", 00:13:06.100 "target": "spare", 00:13:06.100 "progress": { 00:13:06.100 "blocks": 20480, 00:13:06.100 "percent": 32 00:13:06.100 } 00:13:06.100 }, 00:13:06.100 "base_bdevs_list": [ 00:13:06.100 { 00:13:06.100 "name": "spare", 00:13:06.100 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:13:06.100 "is_configured": true, 00:13:06.100 "data_offset": 2048, 00:13:06.100 "data_size": 63488 00:13:06.100 }, 00:13:06.100 { 00:13:06.100 "name": "BaseBdev2", 00:13:06.100 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:06.100 "is_configured": true, 00:13:06.100 "data_offset": 2048, 00:13:06.100 "data_size": 63488 00:13:06.100 } 00:13:06.100 ] 00:13:06.100 }' 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.100 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.360 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.360 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:06.360 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.360 19:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.360 [2024-12-12 19:41:48.967390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.360 [2024-12-12 19:41:49.014489] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:06.360 [2024-12-12 19:41:49.014575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.360 [2024-12-12 19:41:49.014595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.360 [2024-12-12 19:41:49.014607] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.360 19:41:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.360 "name": "raid_bdev1", 00:13:06.360 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:06.360 "strip_size_kb": 0, 00:13:06.360 "state": "online", 00:13:06.360 "raid_level": "raid1", 00:13:06.360 "superblock": true, 00:13:06.360 "num_base_bdevs": 2, 00:13:06.360 "num_base_bdevs_discovered": 1, 00:13:06.360 "num_base_bdevs_operational": 1, 00:13:06.360 "base_bdevs_list": [ 00:13:06.360 { 00:13:06.360 "name": null, 00:13:06.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.360 "is_configured": false, 00:13:06.360 "data_offset": 0, 00:13:06.360 "data_size": 63488 00:13:06.360 }, 00:13:06.360 { 00:13:06.360 "name": "BaseBdev2", 00:13:06.360 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:06.360 "is_configured": true, 00:13:06.360 "data_offset": 2048, 00:13:06.360 "data_size": 63488 00:13:06.360 } 00:13:06.360 ] 00:13:06.360 }' 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.360 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.929 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:06.929 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.929 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.929 [2024-12-12 19:41:49.525084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:06.929 [2024-12-12 19:41:49.525230] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.929 [2024-12-12 19:41:49.525293] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:06.929 [2024-12-12 19:41:49.525334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.929 [2024-12-12 19:41:49.525863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.929 [2024-12-12 19:41:49.525937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:06.929 [2024-12-12 19:41:49.526075] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:06.929 [2024-12-12 19:41:49.526139] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:06.929 [2024-12-12 19:41:49.526190] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:06.929 [2024-12-12 19:41:49.526253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.929 [2024-12-12 19:41:49.542421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:06.929 spare 00:13:06.929 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.929 [2024-12-12 19:41:49.544429] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.929 19:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.869 19:41:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.869 "name": "raid_bdev1", 00:13:07.869 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:07.869 "strip_size_kb": 0, 00:13:07.869 "state": "online", 00:13:07.869 "raid_level": "raid1", 00:13:07.869 "superblock": true, 00:13:07.869 "num_base_bdevs": 2, 00:13:07.869 "num_base_bdevs_discovered": 2, 00:13:07.869 "num_base_bdevs_operational": 2, 00:13:07.869 "process": { 00:13:07.869 "type": "rebuild", 00:13:07.869 "target": "spare", 00:13:07.869 "progress": { 00:13:07.869 "blocks": 20480, 00:13:07.869 "percent": 32 00:13:07.869 } 00:13:07.869 }, 00:13:07.869 "base_bdevs_list": [ 00:13:07.869 { 00:13:07.869 "name": "spare", 00:13:07.869 "uuid": "ce8c786f-0786-54d8-9b3c-c7842db609b7", 00:13:07.869 "is_configured": true, 00:13:07.869 "data_offset": 2048, 00:13:07.869 "data_size": 63488 00:13:07.869 }, 00:13:07.869 { 00:13:07.869 "name": "BaseBdev2", 00:13:07.869 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:07.869 "is_configured": true, 00:13:07.869 "data_offset": 2048, 00:13:07.869 "data_size": 63488 00:13:07.869 } 00:13:07.869 ] 00:13:07.869 }' 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.869 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.869 [2024-12-12 19:41:50.688193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.129 [2024-12-12 19:41:50.750608] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:08.129 [2024-12-12 19:41:50.750790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.129 [2024-12-12 19:41:50.750811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.129 [2024-12-12 19:41:50.750819] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.129 19:41:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.129 "name": "raid_bdev1", 00:13:08.129 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:08.129 "strip_size_kb": 0, 00:13:08.129 "state": "online", 00:13:08.129 "raid_level": "raid1", 00:13:08.129 "superblock": true, 00:13:08.129 "num_base_bdevs": 2, 00:13:08.129 "num_base_bdevs_discovered": 1, 00:13:08.129 "num_base_bdevs_operational": 1, 00:13:08.129 "base_bdevs_list": [ 00:13:08.129 { 00:13:08.129 "name": null, 00:13:08.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.129 "is_configured": false, 00:13:08.129 "data_offset": 0, 00:13:08.129 "data_size": 63488 00:13:08.129 }, 00:13:08.129 { 00:13:08.129 "name": "BaseBdev2", 00:13:08.129 "uuid": 
"bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:08.129 "is_configured": true, 00:13:08.129 "data_offset": 2048, 00:13:08.129 "data_size": 63488 00:13:08.129 } 00:13:08.129 ] 00:13:08.129 }' 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.129 19:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.389 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.649 "name": "raid_bdev1", 00:13:08.649 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:08.649 "strip_size_kb": 0, 00:13:08.649 "state": "online", 00:13:08.649 "raid_level": "raid1", 00:13:08.649 "superblock": true, 00:13:08.649 "num_base_bdevs": 2, 00:13:08.649 "num_base_bdevs_discovered": 1, 00:13:08.649 "num_base_bdevs_operational": 1, 00:13:08.649 
"base_bdevs_list": [ 00:13:08.649 { 00:13:08.649 "name": null, 00:13:08.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.649 "is_configured": false, 00:13:08.649 "data_offset": 0, 00:13:08.649 "data_size": 63488 00:13:08.649 }, 00:13:08.649 { 00:13:08.649 "name": "BaseBdev2", 00:13:08.649 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:08.649 "is_configured": true, 00:13:08.649 "data_offset": 2048, 00:13:08.649 "data_size": 63488 00:13:08.649 } 00:13:08.649 ] 00:13:08.649 }' 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.649 [2024-12-12 19:41:51.352977] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:08.649 [2024-12-12 19:41:51.353066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:08.649 [2024-12-12 19:41:51.353120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:08.649 [2024-12-12 19:41:51.353159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.649 [2024-12-12 19:41:51.353657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.649 [2024-12-12 19:41:51.353679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.649 [2024-12-12 19:41:51.353759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:08.649 [2024-12-12 19:41:51.353772] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:08.649 [2024-12-12 19:41:51.353781] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:08.649 [2024-12-12 19:41:51.353790] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:08.649 BaseBdev1 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.649 19:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.589 "name": "raid_bdev1", 00:13:09.589 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:09.589 "strip_size_kb": 0, 00:13:09.589 "state": "online", 00:13:09.589 "raid_level": "raid1", 00:13:09.589 "superblock": true, 00:13:09.589 "num_base_bdevs": 2, 00:13:09.589 "num_base_bdevs_discovered": 1, 00:13:09.589 "num_base_bdevs_operational": 1, 00:13:09.589 "base_bdevs_list": [ 00:13:09.589 { 00:13:09.589 "name": null, 00:13:09.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.589 "is_configured": false, 00:13:09.589 "data_offset": 0, 00:13:09.589 "data_size": 63488 00:13:09.589 }, 00:13:09.589 { 00:13:09.589 "name": "BaseBdev2", 00:13:09.589 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:09.589 "is_configured": true, 00:13:09.589 "data_offset": 2048, 00:13:09.589 "data_size": 63488 00:13:09.589 } 00:13:09.589 ] 00:13:09.589 }' 
00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.589 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.170 "name": "raid_bdev1", 00:13:10.170 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:10.170 "strip_size_kb": 0, 00:13:10.170 "state": "online", 00:13:10.170 "raid_level": "raid1", 00:13:10.170 "superblock": true, 00:13:10.170 "num_base_bdevs": 2, 00:13:10.170 "num_base_bdevs_discovered": 1, 00:13:10.170 "num_base_bdevs_operational": 1, 00:13:10.170 "base_bdevs_list": [ 00:13:10.170 { 00:13:10.170 "name": null, 00:13:10.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.170 "is_configured": false, 00:13:10.170 "data_offset": 0, 
00:13:10.170 "data_size": 63488 00:13:10.170 }, 00:13:10.170 { 00:13:10.170 "name": "BaseBdev2", 00:13:10.170 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:10.170 "is_configured": true, 00:13:10.170 "data_offset": 2048, 00:13:10.170 "data_size": 63488 00:13:10.170 } 00:13:10.170 ] 00:13:10.170 }' 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:10.170 [2024-12-12 19:41:52.918600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.170 [2024-12-12 19:41:52.918776] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:10.170 [2024-12-12 19:41:52.918794] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:10.170 request: 00:13:10.170 { 00:13:10.170 "base_bdev": "BaseBdev1", 00:13:10.170 "raid_bdev": "raid_bdev1", 00:13:10.170 "method": "bdev_raid_add_base_bdev", 00:13:10.170 "req_id": 1 00:13:10.170 } 00:13:10.170 Got JSON-RPC error response 00:13:10.170 response: 00:13:10.170 { 00:13:10.170 "code": -22, 00:13:10.170 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:10.170 } 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:10.170 19:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.108 19:41:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.108 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.367 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.367 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.367 "name": "raid_bdev1", 00:13:11.367 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:11.367 "strip_size_kb": 0, 00:13:11.367 "state": "online", 00:13:11.367 "raid_level": "raid1", 00:13:11.367 "superblock": true, 00:13:11.367 "num_base_bdevs": 2, 00:13:11.367 "num_base_bdevs_discovered": 1, 00:13:11.367 "num_base_bdevs_operational": 1, 00:13:11.367 "base_bdevs_list": [ 00:13:11.367 { 00:13:11.367 "name": null, 00:13:11.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.367 "is_configured": false, 00:13:11.367 "data_offset": 0, 00:13:11.367 "data_size": 63488 00:13:11.367 }, 00:13:11.367 { 00:13:11.367 "name": "BaseBdev2", 00:13:11.367 "uuid": 
"bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:11.367 "is_configured": true, 00:13:11.367 "data_offset": 2048, 00:13:11.367 "data_size": 63488 00:13:11.367 } 00:13:11.367 ] 00:13:11.367 }' 00:13:11.367 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.367 19:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.627 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.628 "name": "raid_bdev1", 00:13:11.628 "uuid": "1c392354-108e-43d4-9952-b9615c3d57b8", 00:13:11.628 "strip_size_kb": 0, 00:13:11.628 "state": "online", 00:13:11.628 "raid_level": "raid1", 00:13:11.628 "superblock": true, 00:13:11.628 "num_base_bdevs": 2, 00:13:11.628 "num_base_bdevs_discovered": 1, 00:13:11.628 "num_base_bdevs_operational": 1, 00:13:11.628 
"base_bdevs_list": [ 00:13:11.628 { 00:13:11.628 "name": null, 00:13:11.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.628 "is_configured": false, 00:13:11.628 "data_offset": 0, 00:13:11.628 "data_size": 63488 00:13:11.628 }, 00:13:11.628 { 00:13:11.628 "name": "BaseBdev2", 00:13:11.628 "uuid": "bd76ed68-5c86-5a33-9937-4924f3870cf7", 00:13:11.628 "is_configured": true, 00:13:11.628 "data_offset": 2048, 00:13:11.628 "data_size": 63488 00:13:11.628 } 00:13:11.628 ] 00:13:11.628 }' 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.628 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78544 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78544 ']' 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78544 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78544 00:13:11.888 killing process with pid 78544 00:13:11.888 Received shutdown signal, test time was about 16.735151 seconds 00:13:11.888 00:13:11.888 Latency(us) 00:13:11.888 [2024-12-12T19:41:54.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.888 [2024-12-12T19:41:54.733Z] 
=================================================================================================================== 00:13:11.888 [2024-12-12T19:41:54.733Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78544' 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78544 00:13:11.888 [2024-12-12 19:41:54.508134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.888 19:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78544 00:13:11.888 [2024-12-12 19:41:54.508263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.888 [2024-12-12 19:41:54.508334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.888 [2024-12-12 19:41:54.508350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:12.147 [2024-12-12 19:41:54.744244] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.088 19:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:13.088 00:13:13.088 real 0m19.833s 00:13:13.088 user 0m25.537s 00:13:13.088 sys 0m2.378s 00:13:13.088 19:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.088 19:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.088 ************************************ 00:13:13.088 END TEST raid_rebuild_test_sb_io 00:13:13.088 ************************************ 00:13:13.348 19:41:55 bdev_raid -- 
bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:13.348 19:41:55 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:13.348 19:41:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:13.348 19:41:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.348 19:41:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.348 ************************************ 00:13:13.348 START TEST raid_rebuild_test 00:13:13.348 ************************************ 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:13.348 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79237 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 79237 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 79237 ']' 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.349 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.349 [2024-12-12 19:41:56.118481] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:13.349 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:13.349 Zero copy mechanism will not be used. 
00:13:13.349 [2024-12-12 19:41:56.118725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79237 ] 00:13:13.609 [2024-12-12 19:41:56.297855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.609 [2024-12-12 19:41:56.410913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.869 [2024-12-12 19:41:56.611438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.869 [2024-12-12 19:41:56.611498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.128 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.128 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:14.128 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.128 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:14.128 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.128 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.388 BaseBdev1_malloc 00:13:14.388 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.388 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:14.388 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.388 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.388 [2024-12-12 19:41:56.978262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:14.388 
[2024-12-12 19:41:56.978338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.388 [2024-12-12 19:41:56.978361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:14.388 [2024-12-12 19:41:56.978373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.388 [2024-12-12 19:41:56.980463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.388 [2024-12-12 19:41:56.980506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:14.388 BaseBdev1 00:13:14.388 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.388 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.388 19:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:14.388 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.388 19:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.388 BaseBdev2_malloc 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 [2024-12-12 19:41:57.031037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:14.389 [2024-12-12 19:41:57.031189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.389 [2024-12-12 19:41:57.031226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:13:14.389 [2024-12-12 19:41:57.031259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.389 [2024-12-12 19:41:57.033322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.389 [2024-12-12 19:41:57.033406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:14.389 BaseBdev2 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 BaseBdev3_malloc 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 [2024-12-12 19:41:57.096398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:14.389 [2024-12-12 19:41:57.096534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.389 [2024-12-12 19:41:57.096584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:14.389 [2024-12-12 19:41:57.096617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.389 [2024-12-12 19:41:57.098760] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:13:14.389 [2024-12-12 19:41:57.098851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:14.389 BaseBdev3 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 BaseBdev4_malloc 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 [2024-12-12 19:41:57.151552] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:14.389 [2024-12-12 19:41:57.151612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.389 [2024-12-12 19:41:57.151631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:14.389 [2024-12-12 19:41:57.151642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.389 [2024-12-12 19:41:57.153884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.389 [2024-12-12 19:41:57.153925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:14.389 BaseBdev4 00:13:14.389 19:41:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 spare_malloc 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 spare_delay 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 [2024-12-12 19:41:57.216276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:14.389 [2024-12-12 19:41:57.216331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.389 [2024-12-12 19:41:57.216348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:14.389 [2024-12-12 19:41:57.216358] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.389 [2024-12-12 19:41:57.218377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.389 [2024-12-12 19:41:57.218507] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:14.389 spare 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.389 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 [2024-12-12 19:41:57.228330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.389 [2024-12-12 19:41:57.230392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.389 [2024-12-12 19:41:57.230508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.389 [2024-12-12 19:41:57.230648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:14.389 [2024-12-12 19:41:57.230802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:14.389 [2024-12-12 19:41:57.230856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:14.389 [2024-12-12 19:41:57.231167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:14.389 [2024-12-12 19:41:57.231403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:14.389 [2024-12-12 19:41:57.231451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:14.389 [2024-12-12 19:41:57.231675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.648 19:41:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.648 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.648 "name": "raid_bdev1", 00:13:14.648 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:14.648 "strip_size_kb": 0, 00:13:14.648 "state": "online", 00:13:14.648 "raid_level": "raid1", 00:13:14.648 "superblock": false, 00:13:14.648 "num_base_bdevs": 4, 00:13:14.648 "num_base_bdevs_discovered": 4, 
00:13:14.648 "num_base_bdevs_operational": 4, 00:13:14.648 "base_bdevs_list": [ 00:13:14.648 { 00:13:14.648 "name": "BaseBdev1", 00:13:14.648 "uuid": "22fdec35-fbef-5d38-82f5-ab3435eaad95", 00:13:14.648 "is_configured": true, 00:13:14.648 "data_offset": 0, 00:13:14.648 "data_size": 65536 00:13:14.649 }, 00:13:14.649 { 00:13:14.649 "name": "BaseBdev2", 00:13:14.649 "uuid": "7de51768-7761-51c6-9fc6-5a9b35ae4c7b", 00:13:14.649 "is_configured": true, 00:13:14.649 "data_offset": 0, 00:13:14.649 "data_size": 65536 00:13:14.649 }, 00:13:14.649 { 00:13:14.649 "name": "BaseBdev3", 00:13:14.649 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:14.649 "is_configured": true, 00:13:14.649 "data_offset": 0, 00:13:14.649 "data_size": 65536 00:13:14.649 }, 00:13:14.649 { 00:13:14.649 "name": "BaseBdev4", 00:13:14.649 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:14.649 "is_configured": true, 00:13:14.649 "data_offset": 0, 00:13:14.649 "data_size": 65536 00:13:14.649 } 00:13:14.649 ] 00:13:14.649 }' 00:13:14.649 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.649 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.908 [2024-12-12 19:41:57.663898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:14.908 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:15.167 
[2024-12-12 19:41:57.915192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:15.167 /dev/nbd0 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.167 1+0 records in 00:13:15.167 1+0 records out 00:13:15.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556519 s, 7.4 MB/s 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.167 19:41:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:15.167 19:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:21.737 65536+0 records in 00:13:21.737 65536+0 records out 00:13:21.737 33554432 bytes (34 MB, 32 MiB) copied, 5.80536 s, 5.8 MB/s 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:21.737 [2024-12-12 19:42:03.974587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:21.737 19:42:03 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:21.737 19:42:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.737 [2024-12-12 19:42:04.007795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.737 19:42:04 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.737 "name": "raid_bdev1", 00:13:21.737 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:21.737 "strip_size_kb": 0, 00:13:21.737 "state": "online", 00:13:21.737 "raid_level": "raid1", 00:13:21.737 "superblock": false, 00:13:21.737 "num_base_bdevs": 4, 00:13:21.737 "num_base_bdevs_discovered": 3, 00:13:21.737 "num_base_bdevs_operational": 3, 00:13:21.737 "base_bdevs_list": [ 00:13:21.737 { 00:13:21.737 "name": null, 00:13:21.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.737 "is_configured": false, 00:13:21.737 "data_offset": 0, 00:13:21.737 "data_size": 65536 00:13:21.737 }, 00:13:21.737 { 00:13:21.737 "name": "BaseBdev2", 00:13:21.737 "uuid": "7de51768-7761-51c6-9fc6-5a9b35ae4c7b", 00:13:21.737 "is_configured": true, 00:13:21.737 "data_offset": 0, 00:13:21.737 "data_size": 65536 00:13:21.737 }, 00:13:21.737 { 00:13:21.737 "name": "BaseBdev3", 00:13:21.737 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:21.737 "is_configured": true, 00:13:21.737 "data_offset": 0, 00:13:21.737 "data_size": 65536 00:13:21.737 }, 00:13:21.737 { 00:13:21.737 "name": "BaseBdev4", 00:13:21.737 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:21.737 "is_configured": true, 00:13:21.737 "data_offset": 0, 00:13:21.737 "data_size": 65536 00:13:21.737 } 00:13:21.737 ] 
00:13:21.737 }' 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.737 [2024-12-12 19:42:04.443021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.737 [2024-12-12 19:42:04.456700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.737 19:42:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:21.737 [2024-12-12 19:42:04.458519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.676 "name": "raid_bdev1", 00:13:22.676 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:22.676 "strip_size_kb": 0, 00:13:22.676 "state": "online", 00:13:22.676 "raid_level": "raid1", 00:13:22.676 "superblock": false, 00:13:22.676 "num_base_bdevs": 4, 00:13:22.676 "num_base_bdevs_discovered": 4, 00:13:22.676 "num_base_bdevs_operational": 4, 00:13:22.676 "process": { 00:13:22.676 "type": "rebuild", 00:13:22.676 "target": "spare", 00:13:22.676 "progress": { 00:13:22.676 "blocks": 20480, 00:13:22.676 "percent": 31 00:13:22.676 } 00:13:22.676 }, 00:13:22.676 "base_bdevs_list": [ 00:13:22.676 { 00:13:22.676 "name": "spare", 00:13:22.676 "uuid": "f7ea1818-f66a-5d6b-90bd-ab8539236981", 00:13:22.676 "is_configured": true, 00:13:22.676 "data_offset": 0, 00:13:22.676 "data_size": 65536 00:13:22.676 }, 00:13:22.676 { 00:13:22.676 "name": "BaseBdev2", 00:13:22.676 "uuid": "7de51768-7761-51c6-9fc6-5a9b35ae4c7b", 00:13:22.676 "is_configured": true, 00:13:22.676 "data_offset": 0, 00:13:22.676 "data_size": 65536 00:13:22.676 }, 00:13:22.676 { 00:13:22.676 "name": "BaseBdev3", 00:13:22.676 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:22.676 "is_configured": true, 00:13:22.676 "data_offset": 0, 00:13:22.676 "data_size": 65536 00:13:22.676 }, 00:13:22.676 { 00:13:22.676 "name": "BaseBdev4", 00:13:22.676 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:22.676 "is_configured": true, 00:13:22.676 "data_offset": 0, 00:13:22.676 "data_size": 65536 00:13:22.676 } 00:13:22.676 ] 00:13:22.676 }' 00:13:22.676 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.936 [2024-12-12 19:42:05.618247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.936 [2024-12-12 19:42:05.663651] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:22.936 [2024-12-12 19:42:05.663713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.936 [2024-12-12 19:42:05.663730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.936 [2024-12-12 19:42:05.663739] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.936 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.936 "name": "raid_bdev1", 00:13:22.936 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:22.936 "strip_size_kb": 0, 00:13:22.936 "state": "online", 00:13:22.936 "raid_level": "raid1", 00:13:22.936 "superblock": false, 00:13:22.936 "num_base_bdevs": 4, 00:13:22.936 "num_base_bdevs_discovered": 3, 00:13:22.936 "num_base_bdevs_operational": 3, 00:13:22.936 "base_bdevs_list": [ 00:13:22.936 { 00:13:22.936 "name": null, 00:13:22.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.936 "is_configured": false, 00:13:22.936 "data_offset": 0, 00:13:22.936 "data_size": 65536 00:13:22.936 }, 00:13:22.936 { 00:13:22.936 "name": "BaseBdev2", 00:13:22.936 "uuid": "7de51768-7761-51c6-9fc6-5a9b35ae4c7b", 00:13:22.936 "is_configured": true, 00:13:22.936 "data_offset": 0, 00:13:22.936 "data_size": 65536 00:13:22.936 }, 00:13:22.936 { 00:13:22.937 "name": "BaseBdev3", 00:13:22.937 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:22.937 
"is_configured": true, 00:13:22.937 "data_offset": 0, 00:13:22.937 "data_size": 65536 00:13:22.937 }, 00:13:22.937 { 00:13:22.937 "name": "BaseBdev4", 00:13:22.937 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:22.937 "is_configured": true, 00:13:22.937 "data_offset": 0, 00:13:22.937 "data_size": 65536 00:13:22.937 } 00:13:22.937 ] 00:13:22.937 }' 00:13:22.937 19:42:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.937 19:42:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.506 "name": "raid_bdev1", 00:13:23.506 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:23.506 "strip_size_kb": 0, 00:13:23.506 "state": "online", 00:13:23.506 "raid_level": "raid1", 00:13:23.506 "superblock": false, 00:13:23.506 "num_base_bdevs": 4, 00:13:23.506 
"num_base_bdevs_discovered": 3, 00:13:23.506 "num_base_bdevs_operational": 3, 00:13:23.506 "base_bdevs_list": [ 00:13:23.506 { 00:13:23.506 "name": null, 00:13:23.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.506 "is_configured": false, 00:13:23.506 "data_offset": 0, 00:13:23.506 "data_size": 65536 00:13:23.506 }, 00:13:23.506 { 00:13:23.506 "name": "BaseBdev2", 00:13:23.506 "uuid": "7de51768-7761-51c6-9fc6-5a9b35ae4c7b", 00:13:23.506 "is_configured": true, 00:13:23.506 "data_offset": 0, 00:13:23.506 "data_size": 65536 00:13:23.506 }, 00:13:23.506 { 00:13:23.506 "name": "BaseBdev3", 00:13:23.506 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:23.506 "is_configured": true, 00:13:23.506 "data_offset": 0, 00:13:23.506 "data_size": 65536 00:13:23.506 }, 00:13:23.506 { 00:13:23.506 "name": "BaseBdev4", 00:13:23.506 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:23.506 "is_configured": true, 00:13:23.506 "data_offset": 0, 00:13:23.506 "data_size": 65536 00:13:23.506 } 00:13:23.506 ] 00:13:23.506 }' 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.506 [2024-12-12 19:42:06.252475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.506 [2024-12-12 19:42:06.266566] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.506 [2024-12-12 19:42:06.268452] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.506 19:42:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:24.446 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.446 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.446 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.446 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.446 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.446 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.446 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.446 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.446 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.707 "name": "raid_bdev1", 00:13:24.707 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:24.707 "strip_size_kb": 0, 00:13:24.707 "state": "online", 00:13:24.707 "raid_level": "raid1", 00:13:24.707 "superblock": false, 00:13:24.707 "num_base_bdevs": 4, 00:13:24.707 "num_base_bdevs_discovered": 4, 00:13:24.707 "num_base_bdevs_operational": 4, 00:13:24.707 "process": { 00:13:24.707 "type": "rebuild", 00:13:24.707 "target": 
"spare", 00:13:24.707 "progress": { 00:13:24.707 "blocks": 20480, 00:13:24.707 "percent": 31 00:13:24.707 } 00:13:24.707 }, 00:13:24.707 "base_bdevs_list": [ 00:13:24.707 { 00:13:24.707 "name": "spare", 00:13:24.707 "uuid": "f7ea1818-f66a-5d6b-90bd-ab8539236981", 00:13:24.707 "is_configured": true, 00:13:24.707 "data_offset": 0, 00:13:24.707 "data_size": 65536 00:13:24.707 }, 00:13:24.707 { 00:13:24.707 "name": "BaseBdev2", 00:13:24.707 "uuid": "7de51768-7761-51c6-9fc6-5a9b35ae4c7b", 00:13:24.707 "is_configured": true, 00:13:24.707 "data_offset": 0, 00:13:24.707 "data_size": 65536 00:13:24.707 }, 00:13:24.707 { 00:13:24.707 "name": "BaseBdev3", 00:13:24.707 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:24.707 "is_configured": true, 00:13:24.707 "data_offset": 0, 00:13:24.707 "data_size": 65536 00:13:24.707 }, 00:13:24.707 { 00:13:24.707 "name": "BaseBdev4", 00:13:24.707 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:24.707 "is_configured": true, 00:13:24.707 "data_offset": 0, 00:13:24.707 "data_size": 65536 00:13:24.707 } 00:13:24.707 ] 00:13:24.707 }' 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.707 [2024-12-12 19:42:07.431941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:24.707 [2024-12-12 19:42:07.473476] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.707 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:24.708 "name": "raid_bdev1", 00:13:24.708 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:24.708 "strip_size_kb": 0, 00:13:24.708 "state": "online", 00:13:24.708 "raid_level": "raid1", 00:13:24.708 "superblock": false, 00:13:24.708 "num_base_bdevs": 4, 00:13:24.708 "num_base_bdevs_discovered": 3, 00:13:24.708 "num_base_bdevs_operational": 3, 00:13:24.708 "process": { 00:13:24.708 "type": "rebuild", 00:13:24.708 "target": "spare", 00:13:24.708 "progress": { 00:13:24.708 "blocks": 24576, 00:13:24.708 "percent": 37 00:13:24.708 } 00:13:24.708 }, 00:13:24.708 "base_bdevs_list": [ 00:13:24.708 { 00:13:24.708 "name": "spare", 00:13:24.708 "uuid": "f7ea1818-f66a-5d6b-90bd-ab8539236981", 00:13:24.708 "is_configured": true, 00:13:24.708 "data_offset": 0, 00:13:24.708 "data_size": 65536 00:13:24.708 }, 00:13:24.708 { 00:13:24.708 "name": null, 00:13:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.708 "is_configured": false, 00:13:24.708 "data_offset": 0, 00:13:24.708 "data_size": 65536 00:13:24.708 }, 00:13:24.708 { 00:13:24.708 "name": "BaseBdev3", 00:13:24.708 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:24.708 "is_configured": true, 00:13:24.708 "data_offset": 0, 00:13:24.708 "data_size": 65536 00:13:24.708 }, 00:13:24.708 { 00:13:24.708 "name": "BaseBdev4", 00:13:24.708 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:24.708 "is_configured": true, 00:13:24.708 "data_offset": 0, 00:13:24.708 "data_size": 65536 00:13:24.708 } 00:13:24.708 ] 00:13:24.708 }' 00:13:24.708 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.969 19:42:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=443 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.969 "name": "raid_bdev1", 00:13:24.969 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:24.969 "strip_size_kb": 0, 00:13:24.969 "state": "online", 00:13:24.969 "raid_level": "raid1", 00:13:24.969 "superblock": false, 00:13:24.969 "num_base_bdevs": 4, 00:13:24.969 "num_base_bdevs_discovered": 3, 00:13:24.969 "num_base_bdevs_operational": 3, 00:13:24.969 "process": { 00:13:24.969 "type": "rebuild", 00:13:24.969 "target": "spare", 00:13:24.969 "progress": { 00:13:24.969 "blocks": 26624, 00:13:24.969 "percent": 40 00:13:24.969 } 00:13:24.969 }, 00:13:24.969 "base_bdevs_list": [ 00:13:24.969 { 00:13:24.969 "name": 
"spare", 00:13:24.969 "uuid": "f7ea1818-f66a-5d6b-90bd-ab8539236981", 00:13:24.969 "is_configured": true, 00:13:24.969 "data_offset": 0, 00:13:24.969 "data_size": 65536 00:13:24.969 }, 00:13:24.969 { 00:13:24.969 "name": null, 00:13:24.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.969 "is_configured": false, 00:13:24.969 "data_offset": 0, 00:13:24.969 "data_size": 65536 00:13:24.969 }, 00:13:24.969 { 00:13:24.969 "name": "BaseBdev3", 00:13:24.969 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:24.969 "is_configured": true, 00:13:24.969 "data_offset": 0, 00:13:24.969 "data_size": 65536 00:13:24.969 }, 00:13:24.969 { 00:13:24.969 "name": "BaseBdev4", 00:13:24.969 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:24.969 "is_configured": true, 00:13:24.969 "data_offset": 0, 00:13:24.969 "data_size": 65536 00:13:24.969 } 00:13:24.969 ] 00:13:24.969 }' 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.969 19:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.350 19:42:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.350 "name": "raid_bdev1", 00:13:26.350 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:26.350 "strip_size_kb": 0, 00:13:26.350 "state": "online", 00:13:26.350 "raid_level": "raid1", 00:13:26.350 "superblock": false, 00:13:26.350 "num_base_bdevs": 4, 00:13:26.350 "num_base_bdevs_discovered": 3, 00:13:26.350 "num_base_bdevs_operational": 3, 00:13:26.350 "process": { 00:13:26.350 "type": "rebuild", 00:13:26.350 "target": "spare", 00:13:26.350 "progress": { 00:13:26.350 "blocks": 49152, 00:13:26.350 "percent": 75 00:13:26.350 } 00:13:26.350 }, 00:13:26.350 "base_bdevs_list": [ 00:13:26.350 { 00:13:26.350 "name": "spare", 00:13:26.350 "uuid": "f7ea1818-f66a-5d6b-90bd-ab8539236981", 00:13:26.350 "is_configured": true, 00:13:26.350 "data_offset": 0, 00:13:26.350 "data_size": 65536 00:13:26.350 }, 00:13:26.350 { 00:13:26.350 "name": null, 00:13:26.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.350 "is_configured": false, 00:13:26.350 "data_offset": 0, 00:13:26.350 "data_size": 65536 00:13:26.350 }, 00:13:26.350 { 00:13:26.350 "name": "BaseBdev3", 00:13:26.350 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:26.350 "is_configured": true, 00:13:26.350 "data_offset": 0, 00:13:26.350 "data_size": 65536 00:13:26.350 }, 00:13:26.350 { 00:13:26.350 
"name": "BaseBdev4", 00:13:26.350 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:26.350 "is_configured": true, 00:13:26.350 "data_offset": 0, 00:13:26.350 "data_size": 65536 00:13:26.350 } 00:13:26.350 ] 00:13:26.350 }' 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.350 19:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.920 [2024-12-12 19:42:09.481889] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:26.920 [2024-12-12 19:42:09.481960] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:26.920 [2024-12-12 19:42:09.482008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.180 "name": "raid_bdev1", 00:13:27.180 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:27.180 "strip_size_kb": 0, 00:13:27.180 "state": "online", 00:13:27.180 "raid_level": "raid1", 00:13:27.180 "superblock": false, 00:13:27.180 "num_base_bdevs": 4, 00:13:27.180 "num_base_bdevs_discovered": 3, 00:13:27.180 "num_base_bdevs_operational": 3, 00:13:27.180 "base_bdevs_list": [ 00:13:27.180 { 00:13:27.180 "name": "spare", 00:13:27.180 "uuid": "f7ea1818-f66a-5d6b-90bd-ab8539236981", 00:13:27.180 "is_configured": true, 00:13:27.180 "data_offset": 0, 00:13:27.180 "data_size": 65536 00:13:27.180 }, 00:13:27.180 { 00:13:27.180 "name": null, 00:13:27.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.180 "is_configured": false, 00:13:27.180 "data_offset": 0, 00:13:27.180 "data_size": 65536 00:13:27.180 }, 00:13:27.180 { 00:13:27.180 "name": "BaseBdev3", 00:13:27.180 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:27.180 "is_configured": true, 00:13:27.180 "data_offset": 0, 00:13:27.180 "data_size": 65536 00:13:27.180 }, 00:13:27.180 { 00:13:27.180 "name": "BaseBdev4", 00:13:27.180 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:27.180 "is_configured": true, 00:13:27.180 "data_offset": 0, 00:13:27.180 "data_size": 65536 00:13:27.180 } 00:13:27.180 ] 00:13:27.180 }' 00:13:27.180 19:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:27.440 19:42:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.440 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.440 "name": "raid_bdev1", 00:13:27.440 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:27.440 "strip_size_kb": 0, 00:13:27.440 "state": "online", 00:13:27.440 "raid_level": "raid1", 00:13:27.440 "superblock": false, 00:13:27.440 "num_base_bdevs": 4, 00:13:27.440 "num_base_bdevs_discovered": 3, 00:13:27.440 "num_base_bdevs_operational": 3, 00:13:27.440 "base_bdevs_list": [ 00:13:27.440 { 00:13:27.440 "name": "spare", 00:13:27.440 "uuid": "f7ea1818-f66a-5d6b-90bd-ab8539236981", 00:13:27.440 "is_configured": true, 
00:13:27.440 "data_offset": 0, 00:13:27.440 "data_size": 65536 00:13:27.440 }, 00:13:27.440 { 00:13:27.440 "name": null, 00:13:27.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.440 "is_configured": false, 00:13:27.440 "data_offset": 0, 00:13:27.440 "data_size": 65536 00:13:27.440 }, 00:13:27.440 { 00:13:27.440 "name": "BaseBdev3", 00:13:27.440 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:27.441 "is_configured": true, 00:13:27.441 "data_offset": 0, 00:13:27.441 "data_size": 65536 00:13:27.441 }, 00:13:27.441 { 00:13:27.441 "name": "BaseBdev4", 00:13:27.441 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:27.441 "is_configured": true, 00:13:27.441 "data_offset": 0, 00:13:27.441 "data_size": 65536 00:13:27.441 } 00:13:27.441 ] 00:13:27.441 }' 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.441 
19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.441 "name": "raid_bdev1", 00:13:27.441 "uuid": "d2b7f3c1-218e-46e6-a798-5674fcf719ee", 00:13:27.441 "strip_size_kb": 0, 00:13:27.441 "state": "online", 00:13:27.441 "raid_level": "raid1", 00:13:27.441 "superblock": false, 00:13:27.441 "num_base_bdevs": 4, 00:13:27.441 "num_base_bdevs_discovered": 3, 00:13:27.441 "num_base_bdevs_operational": 3, 00:13:27.441 "base_bdevs_list": [ 00:13:27.441 { 00:13:27.441 "name": "spare", 00:13:27.441 "uuid": "f7ea1818-f66a-5d6b-90bd-ab8539236981", 00:13:27.441 "is_configured": true, 00:13:27.441 "data_offset": 0, 00:13:27.441 "data_size": 65536 00:13:27.441 }, 00:13:27.441 { 00:13:27.441 "name": null, 00:13:27.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.441 "is_configured": false, 00:13:27.441 "data_offset": 0, 00:13:27.441 "data_size": 65536 00:13:27.441 }, 00:13:27.441 { 00:13:27.441 "name": "BaseBdev3", 00:13:27.441 "uuid": "ff1a9e65-2f0e-5860-9e99-e7dc7dbb8069", 00:13:27.441 "is_configured": true, 00:13:27.441 "data_offset": 0, 00:13:27.441 "data_size": 65536 00:13:27.441 }, 00:13:27.441 { 
00:13:27.441 "name": "BaseBdev4", 00:13:27.441 "uuid": "9450792c-3f2b-51a9-b8d7-b0223516fbd9", 00:13:27.441 "is_configured": true, 00:13:27.441 "data_offset": 0, 00:13:27.441 "data_size": 65536 00:13:27.441 } 00:13:27.441 ] 00:13:27.441 }' 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.441 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.011 [2024-12-12 19:42:10.648764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.011 [2024-12-12 19:42:10.648799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.011 [2024-12-12 19:42:10.648882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.011 [2024-12-12 19:42:10.648960] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.011 [2024-12-12 19:42:10.648969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.011 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:28.271 /dev/nbd0 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.271 19:42:10 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.271 1+0 records in 00:13:28.271 1+0 records out 00:13:28.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368358 s, 11.1 MB/s 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.271 19:42:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:28.531 /dev/nbd1 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:28.531 
19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.531 1+0 records in 00:13:28.531 1+0 records out 00:13:28.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338675 s, 12.1 MB/s 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 
/dev/nbd1 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.531 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.791 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.051 
19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79237 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 79237 ']' 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 79237 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79237 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79237' 00:13:29.051 killing process with pid 79237 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 79237 00:13:29.051 Received shutdown signal, test time was about 60.000000 seconds 00:13:29.051 00:13:29.051 Latency(us) 
00:13:29.051 [2024-12-12T19:42:11.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.051 [2024-12-12T19:42:11.896Z] =================================================================================================================== 00:13:29.051 [2024-12-12T19:42:11.896Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:29.051 [2024-12-12 19:42:11.853273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.051 19:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 79237 00:13:29.620 [2024-12-12 19:42:12.333345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:31.002 00:13:31.002 real 0m17.417s 00:13:31.002 user 0m18.822s 00:13:31.002 sys 0m3.183s 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.002 ************************************ 00:13:31.002 END TEST raid_rebuild_test 00:13:31.002 ************************************ 00:13:31.002 19:42:13 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:31.002 19:42:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:31.002 19:42:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.002 19:42:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.002 ************************************ 00:13:31.002 START TEST raid_rebuild_test_sb 00:13:31.002 ************************************ 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:31.002 19:42:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.002 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79682 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79682 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79682 ']' 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.003 19:42:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.003 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:31.003 Zero copy mechanism will not be used. 00:13:31.003 [2024-12-12 19:42:13.604364] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:31.003 [2024-12-12 19:42:13.604470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79682 ] 00:13:31.003 [2024-12-12 19:42:13.776571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.262 [2024-12-12 19:42:13.887020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.262 [2024-12-12 19:42:14.080433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.263 [2024-12-12 19:42:14.080494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.833 19:42:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 BaseBdev1_malloc 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 [2024-12-12 19:42:14.474671] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:31.833 [2024-12-12 19:42:14.474734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.833 [2024-12-12 19:42:14.474760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:31.833 [2024-12-12 19:42:14.474772] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.833 [2024-12-12 19:42:14.476908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.833 [2024-12-12 19:42:14.476948] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:31.833 BaseBdev1 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 BaseBdev2_malloc 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 [2024-12-12 19:42:14.527253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:31.833 [2024-12-12 19:42:14.527316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.833 [2024-12-12 19:42:14.527335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:31.833 [2024-12-12 19:42:14.527346] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.833 [2024-12-12 19:42:14.529339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.833 [2024-12-12 19:42:14.529379] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:31.833 BaseBdev2 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 BaseBdev3_malloc 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:31.833 19:42:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 [2024-12-12 19:42:14.595026] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:31.833 [2024-12-12 19:42:14.595080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.833 [2024-12-12 19:42:14.595100] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:31.833 [2024-12-12 19:42:14.595111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.833 [2024-12-12 19:42:14.597154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.833 [2024-12-12 19:42:14.597191] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:31.833 BaseBdev3 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 BaseBdev4_malloc 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 
[2024-12-12 19:42:14.649501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:31.833 [2024-12-12 19:42:14.649567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.833 [2024-12-12 19:42:14.649587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:31.833 [2024-12-12 19:42:14.649597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.833 [2024-12-12 19:42:14.651576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.833 [2024-12-12 19:42:14.651627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:31.833 BaseBdev4 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.833 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.094 spare_malloc 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.094 spare_delay 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.094 19:42:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.094 [2024-12-12 19:42:14.716820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.094 [2024-12-12 19:42:14.716870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.094 [2024-12-12 19:42:14.716887] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:32.094 [2024-12-12 19:42:14.716897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.094 [2024-12-12 19:42:14.718916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.094 [2024-12-12 19:42:14.718953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.094 spare 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.094 [2024-12-12 19:42:14.728829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.094 [2024-12-12 19:42:14.730598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.094 [2024-12-12 19:42:14.730663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:32.094 [2024-12-12 19:42:14.730711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:32.094 [2024-12-12 19:42:14.730915] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:32.094 [2024-12-12 19:42:14.730947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:32.094 [2024-12-12 19:42:14.731188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:32.094 [2024-12-12 19:42:14.731376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:32.094 [2024-12-12 19:42:14.731394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:32.094 [2024-12-12 19:42:14.731561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.094 "name": "raid_bdev1", 00:13:32.094 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:32.094 "strip_size_kb": 0, 00:13:32.094 "state": "online", 00:13:32.094 "raid_level": "raid1", 00:13:32.094 "superblock": true, 00:13:32.094 "num_base_bdevs": 4, 00:13:32.094 "num_base_bdevs_discovered": 4, 00:13:32.094 "num_base_bdevs_operational": 4, 00:13:32.094 "base_bdevs_list": [ 00:13:32.094 { 00:13:32.094 "name": "BaseBdev1", 00:13:32.094 "uuid": "e5307b6e-e40e-5f87-bfcd-e4dc7643e4f1", 00:13:32.094 "is_configured": true, 00:13:32.094 "data_offset": 2048, 00:13:32.094 "data_size": 63488 00:13:32.094 }, 00:13:32.094 { 00:13:32.094 "name": "BaseBdev2", 00:13:32.094 "uuid": "806c24d6-f833-5376-80bf-dab4424dbc68", 00:13:32.094 "is_configured": true, 00:13:32.094 "data_offset": 2048, 00:13:32.094 "data_size": 63488 00:13:32.094 }, 00:13:32.094 { 00:13:32.094 "name": "BaseBdev3", 00:13:32.094 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:32.094 "is_configured": true, 00:13:32.094 "data_offset": 2048, 00:13:32.094 "data_size": 63488 00:13:32.094 }, 00:13:32.094 { 00:13:32.094 "name": "BaseBdev4", 00:13:32.094 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:32.094 "is_configured": true, 00:13:32.094 "data_offset": 2048, 00:13:32.094 "data_size": 63488 00:13:32.094 } 00:13:32.094 ] 00:13:32.094 }' 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.094 19:42:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.354 [2024-12-12 19:42:15.164431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.354 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.614 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:32.614 [2024-12-12 19:42:15.411753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:32.614 /dev/nbd0 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:32.873 
19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.873 1+0 records in 00:13:32.873 1+0 records out 00:13:32.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031369 s, 13.1 MB/s 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:32.873 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.874 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.874 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:32.874 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:32.874 19:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:39.449 63488+0 records in 00:13:39.449 63488+0 records out 00:13:39.449 32505856 bytes (33 MB, 31 MiB) copied, 5.64407 s, 5.8 MB/s 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:39.449 [2024-12-12 19:42:21.347579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.449 [2024-12-12 19:42:21.383635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:39.449 
19:42:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.449 "name": "raid_bdev1", 00:13:39.449 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:39.449 "strip_size_kb": 0, 00:13:39.449 "state": 
"online", 00:13:39.449 "raid_level": "raid1", 00:13:39.449 "superblock": true, 00:13:39.449 "num_base_bdevs": 4, 00:13:39.449 "num_base_bdevs_discovered": 3, 00:13:39.449 "num_base_bdevs_operational": 3, 00:13:39.449 "base_bdevs_list": [ 00:13:39.449 { 00:13:39.449 "name": null, 00:13:39.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.449 "is_configured": false, 00:13:39.449 "data_offset": 0, 00:13:39.449 "data_size": 63488 00:13:39.449 }, 00:13:39.449 { 00:13:39.449 "name": "BaseBdev2", 00:13:39.449 "uuid": "806c24d6-f833-5376-80bf-dab4424dbc68", 00:13:39.449 "is_configured": true, 00:13:39.449 "data_offset": 2048, 00:13:39.449 "data_size": 63488 00:13:39.449 }, 00:13:39.449 { 00:13:39.449 "name": "BaseBdev3", 00:13:39.449 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:39.449 "is_configured": true, 00:13:39.449 "data_offset": 2048, 00:13:39.449 "data_size": 63488 00:13:39.449 }, 00:13:39.449 { 00:13:39.449 "name": "BaseBdev4", 00:13:39.449 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:39.449 "is_configured": true, 00:13:39.449 "data_offset": 2048, 00:13:39.449 "data_size": 63488 00:13:39.449 } 00:13:39.449 ] 00:13:39.449 }' 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.449 19:42:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.450 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.450 19:42:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.450 19:42:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.450 [2024-12-12 19:42:21.822843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.450 [2024-12-12 19:42:21.837131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:39.450 19:42:21 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.450 19:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:39.450 [2024-12-12 19:42:21.839394] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.019 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.019 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.019 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.019 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.019 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.019 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.019 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.019 19:42:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.019 19:42:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.279 19:42:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.279 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.279 "name": "raid_bdev1", 00:13:40.279 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:40.279 "strip_size_kb": 0, 00:13:40.279 "state": "online", 00:13:40.279 "raid_level": "raid1", 00:13:40.279 "superblock": true, 00:13:40.279 "num_base_bdevs": 4, 00:13:40.279 "num_base_bdevs_discovered": 4, 00:13:40.279 "num_base_bdevs_operational": 4, 00:13:40.279 "process": { 00:13:40.279 "type": "rebuild", 00:13:40.279 "target": "spare", 00:13:40.279 "progress": { 00:13:40.279 "blocks": 20480, 
00:13:40.279 "percent": 32 00:13:40.279 } 00:13:40.279 }, 00:13:40.279 "base_bdevs_list": [ 00:13:40.279 { 00:13:40.279 "name": "spare", 00:13:40.279 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:40.279 "is_configured": true, 00:13:40.279 "data_offset": 2048, 00:13:40.279 "data_size": 63488 00:13:40.279 }, 00:13:40.279 { 00:13:40.279 "name": "BaseBdev2", 00:13:40.279 "uuid": "806c24d6-f833-5376-80bf-dab4424dbc68", 00:13:40.279 "is_configured": true, 00:13:40.279 "data_offset": 2048, 00:13:40.279 "data_size": 63488 00:13:40.279 }, 00:13:40.279 { 00:13:40.279 "name": "BaseBdev3", 00:13:40.279 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:40.279 "is_configured": true, 00:13:40.279 "data_offset": 2048, 00:13:40.279 "data_size": 63488 00:13:40.279 }, 00:13:40.279 { 00:13:40.279 "name": "BaseBdev4", 00:13:40.279 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:40.279 "is_configured": true, 00:13:40.279 "data_offset": 2048, 00:13:40.279 "data_size": 63488 00:13:40.279 } 00:13:40.279 ] 00:13:40.279 }' 00:13:40.279 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.279 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.279 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.279 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.279 19:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:40.279 19:42:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.279 19:42:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.279 [2024-12-12 19:42:23.003644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:40.279 [2024-12-12 19:42:23.049368] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:40.279 [2024-12-12 19:42:23.049480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.279 [2024-12-12 19:42:23.049501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:40.279 [2024-12-12 19:42:23.049517] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.279 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.539 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.539 "name": "raid_bdev1", 00:13:40.539 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:40.539 "strip_size_kb": 0, 00:13:40.539 "state": "online", 00:13:40.539 "raid_level": "raid1", 00:13:40.539 "superblock": true, 00:13:40.539 "num_base_bdevs": 4, 00:13:40.539 "num_base_bdevs_discovered": 3, 00:13:40.539 "num_base_bdevs_operational": 3, 00:13:40.539 "base_bdevs_list": [ 00:13:40.539 { 00:13:40.539 "name": null, 00:13:40.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.539 "is_configured": false, 00:13:40.539 "data_offset": 0, 00:13:40.539 "data_size": 63488 00:13:40.539 }, 00:13:40.539 { 00:13:40.539 "name": "BaseBdev2", 00:13:40.539 "uuid": "806c24d6-f833-5376-80bf-dab4424dbc68", 00:13:40.539 "is_configured": true, 00:13:40.539 "data_offset": 2048, 00:13:40.539 "data_size": 63488 00:13:40.539 }, 00:13:40.539 { 00:13:40.539 "name": "BaseBdev3", 00:13:40.539 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:40.539 "is_configured": true, 00:13:40.539 "data_offset": 2048, 00:13:40.539 "data_size": 63488 00:13:40.539 }, 00:13:40.539 { 00:13:40.539 "name": "BaseBdev4", 00:13:40.539 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:40.539 "is_configured": true, 00:13:40.539 "data_offset": 2048, 00:13:40.539 "data_size": 63488 00:13:40.539 } 00:13:40.539 ] 00:13:40.539 }' 00:13:40.539 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.539 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.799 
19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.799 "name": "raid_bdev1", 00:13:40.799 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:40.799 "strip_size_kb": 0, 00:13:40.799 "state": "online", 00:13:40.799 "raid_level": "raid1", 00:13:40.799 "superblock": true, 00:13:40.799 "num_base_bdevs": 4, 00:13:40.799 "num_base_bdevs_discovered": 3, 00:13:40.799 "num_base_bdevs_operational": 3, 00:13:40.799 "base_bdevs_list": [ 00:13:40.799 { 00:13:40.799 "name": null, 00:13:40.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.799 "is_configured": false, 00:13:40.799 "data_offset": 0, 00:13:40.799 "data_size": 63488 00:13:40.799 }, 00:13:40.799 { 00:13:40.799 "name": "BaseBdev2", 00:13:40.799 "uuid": "806c24d6-f833-5376-80bf-dab4424dbc68", 00:13:40.799 "is_configured": true, 00:13:40.799 "data_offset": 2048, 00:13:40.799 "data_size": 63488 00:13:40.799 }, 00:13:40.799 { 00:13:40.799 "name": "BaseBdev3", 00:13:40.799 "uuid": 
"7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:40.799 "is_configured": true, 00:13:40.799 "data_offset": 2048, 00:13:40.799 "data_size": 63488 00:13:40.799 }, 00:13:40.799 { 00:13:40.799 "name": "BaseBdev4", 00:13:40.799 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:40.799 "is_configured": true, 00:13:40.799 "data_offset": 2048, 00:13:40.799 "data_size": 63488 00:13:40.799 } 00:13:40.799 ] 00:13:40.799 }' 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.799 [2024-12-12 19:42:23.598174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.799 [2024-12-12 19:42:23.614459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.799 19:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:40.799 [2024-12-12 19:42:23.616855] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.238 "name": "raid_bdev1", 00:13:42.238 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:42.238 "strip_size_kb": 0, 00:13:42.238 "state": "online", 00:13:42.238 "raid_level": "raid1", 00:13:42.238 "superblock": true, 00:13:42.238 "num_base_bdevs": 4, 00:13:42.238 "num_base_bdevs_discovered": 4, 00:13:42.238 "num_base_bdevs_operational": 4, 00:13:42.238 "process": { 00:13:42.238 "type": "rebuild", 00:13:42.238 "target": "spare", 00:13:42.238 "progress": { 00:13:42.238 "blocks": 20480, 00:13:42.238 "percent": 32 00:13:42.238 } 00:13:42.238 }, 00:13:42.238 "base_bdevs_list": [ 00:13:42.238 { 00:13:42.238 "name": "spare", 00:13:42.238 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:42.238 "is_configured": true, 00:13:42.238 "data_offset": 2048, 00:13:42.238 "data_size": 63488 00:13:42.238 }, 00:13:42.238 { 00:13:42.238 "name": "BaseBdev2", 00:13:42.238 "uuid": "806c24d6-f833-5376-80bf-dab4424dbc68", 00:13:42.238 "is_configured": true, 00:13:42.238 "data_offset": 2048, 
00:13:42.238 "data_size": 63488 00:13:42.238 }, 00:13:42.238 { 00:13:42.238 "name": "BaseBdev3", 00:13:42.238 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:42.238 "is_configured": true, 00:13:42.238 "data_offset": 2048, 00:13:42.238 "data_size": 63488 00:13:42.238 }, 00:13:42.238 { 00:13:42.238 "name": "BaseBdev4", 00:13:42.238 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:42.238 "is_configured": true, 00:13:42.238 "data_offset": 2048, 00:13:42.238 "data_size": 63488 00:13:42.238 } 00:13:42.238 ] 00:13:42.238 }' 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:42.238 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.238 [2024-12-12 19:42:24.781185] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:42.238 [2024-12-12 19:42:24.927107] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.238 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.238 "name": "raid_bdev1", 00:13:42.238 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:42.238 "strip_size_kb": 0, 00:13:42.238 "state": "online", 00:13:42.238 "raid_level": "raid1", 00:13:42.238 "superblock": true, 00:13:42.238 "num_base_bdevs": 4, 
00:13:42.238 "num_base_bdevs_discovered": 3, 00:13:42.238 "num_base_bdevs_operational": 3, 00:13:42.238 "process": { 00:13:42.238 "type": "rebuild", 00:13:42.238 "target": "spare", 00:13:42.238 "progress": { 00:13:42.238 "blocks": 24576, 00:13:42.238 "percent": 38 00:13:42.238 } 00:13:42.238 }, 00:13:42.238 "base_bdevs_list": [ 00:13:42.238 { 00:13:42.238 "name": "spare", 00:13:42.238 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:42.238 "is_configured": true, 00:13:42.238 "data_offset": 2048, 00:13:42.238 "data_size": 63488 00:13:42.238 }, 00:13:42.238 { 00:13:42.238 "name": null, 00:13:42.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.238 "is_configured": false, 00:13:42.238 "data_offset": 0, 00:13:42.238 "data_size": 63488 00:13:42.238 }, 00:13:42.238 { 00:13:42.238 "name": "BaseBdev3", 00:13:42.238 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:42.238 "is_configured": true, 00:13:42.239 "data_offset": 2048, 00:13:42.239 "data_size": 63488 00:13:42.239 }, 00:13:42.239 { 00:13:42.239 "name": "BaseBdev4", 00:13:42.239 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:42.239 "is_configured": true, 00:13:42.239 "data_offset": 2048, 00:13:42.239 "data_size": 63488 00:13:42.239 } 00:13:42.239 ] 00:13:42.239 }' 00:13:42.239 19:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=461 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.239 19:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.497 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.497 "name": "raid_bdev1", 00:13:42.497 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:42.497 "strip_size_kb": 0, 00:13:42.497 "state": "online", 00:13:42.497 "raid_level": "raid1", 00:13:42.497 "superblock": true, 00:13:42.497 "num_base_bdevs": 4, 00:13:42.497 "num_base_bdevs_discovered": 3, 00:13:42.497 "num_base_bdevs_operational": 3, 00:13:42.497 "process": { 00:13:42.497 "type": "rebuild", 00:13:42.497 "target": "spare", 00:13:42.497 "progress": { 00:13:42.497 "blocks": 26624, 00:13:42.497 "percent": 41 00:13:42.497 } 00:13:42.497 }, 00:13:42.497 "base_bdevs_list": [ 00:13:42.497 { 00:13:42.497 "name": "spare", 00:13:42.497 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:42.497 "is_configured": true, 00:13:42.497 "data_offset": 2048, 00:13:42.497 "data_size": 63488 00:13:42.497 }, 00:13:42.497 { 
00:13:42.497 "name": null, 00:13:42.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.497 "is_configured": false, 00:13:42.497 "data_offset": 0, 00:13:42.497 "data_size": 63488 00:13:42.497 }, 00:13:42.497 { 00:13:42.497 "name": "BaseBdev3", 00:13:42.497 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:42.497 "is_configured": true, 00:13:42.497 "data_offset": 2048, 00:13:42.497 "data_size": 63488 00:13:42.497 }, 00:13:42.497 { 00:13:42.497 "name": "BaseBdev4", 00:13:42.497 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:42.497 "is_configured": true, 00:13:42.497 "data_offset": 2048, 00:13:42.497 "data_size": 63488 00:13:42.497 } 00:13:42.497 ] 00:13:42.497 }' 00:13:42.497 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.497 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.497 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.497 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.497 19:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.434 "name": "raid_bdev1", 00:13:43.434 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:43.434 "strip_size_kb": 0, 00:13:43.434 "state": "online", 00:13:43.434 "raid_level": "raid1", 00:13:43.434 "superblock": true, 00:13:43.434 "num_base_bdevs": 4, 00:13:43.434 "num_base_bdevs_discovered": 3, 00:13:43.434 "num_base_bdevs_operational": 3, 00:13:43.434 "process": { 00:13:43.434 "type": "rebuild", 00:13:43.434 "target": "spare", 00:13:43.434 "progress": { 00:13:43.434 "blocks": 49152, 00:13:43.434 "percent": 77 00:13:43.434 } 00:13:43.434 }, 00:13:43.434 "base_bdevs_list": [ 00:13:43.434 { 00:13:43.434 "name": "spare", 00:13:43.434 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:43.434 "is_configured": true, 00:13:43.434 "data_offset": 2048, 00:13:43.434 "data_size": 63488 00:13:43.434 }, 00:13:43.434 { 00:13:43.434 "name": null, 00:13:43.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.434 "is_configured": false, 00:13:43.434 "data_offset": 0, 00:13:43.434 "data_size": 63488 00:13:43.434 }, 00:13:43.434 { 00:13:43.434 "name": "BaseBdev3", 00:13:43.434 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:43.434 "is_configured": true, 00:13:43.434 "data_offset": 2048, 00:13:43.434 "data_size": 63488 00:13:43.434 }, 00:13:43.434 { 00:13:43.434 "name": "BaseBdev4", 00:13:43.434 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:43.434 "is_configured": true, 00:13:43.434 "data_offset": 
2048, 00:13:43.434 "data_size": 63488 00:13:43.434 } 00:13:43.434 ] 00:13:43.434 }' 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.434 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.694 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.694 19:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:44.262 [2024-12-12 19:42:26.843907] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:44.262 [2024-12-12 19:42:26.844045] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:44.262 [2024-12-12 19:42:26.844228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.521 19:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.781 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.781 "name": "raid_bdev1", 00:13:44.781 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:44.781 "strip_size_kb": 0, 00:13:44.781 "state": "online", 00:13:44.781 "raid_level": "raid1", 00:13:44.781 "superblock": true, 00:13:44.781 "num_base_bdevs": 4, 00:13:44.781 "num_base_bdevs_discovered": 3, 00:13:44.781 "num_base_bdevs_operational": 3, 00:13:44.781 "base_bdevs_list": [ 00:13:44.781 { 00:13:44.781 "name": "spare", 00:13:44.781 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:44.781 "is_configured": true, 00:13:44.781 "data_offset": 2048, 00:13:44.781 "data_size": 63488 00:13:44.781 }, 00:13:44.781 { 00:13:44.781 "name": null, 00:13:44.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.781 "is_configured": false, 00:13:44.781 "data_offset": 0, 00:13:44.781 "data_size": 63488 00:13:44.781 }, 00:13:44.781 { 00:13:44.781 "name": "BaseBdev3", 00:13:44.781 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:44.781 "is_configured": true, 00:13:44.781 "data_offset": 2048, 00:13:44.781 "data_size": 63488 00:13:44.781 }, 00:13:44.781 { 00:13:44.781 "name": "BaseBdev4", 00:13:44.781 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:44.781 "is_configured": true, 00:13:44.781 "data_offset": 2048, 00:13:44.781 "data_size": 63488 00:13:44.781 } 00:13:44.781 ] 00:13:44.781 }' 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.782 "name": "raid_bdev1", 00:13:44.782 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:44.782 "strip_size_kb": 0, 00:13:44.782 "state": "online", 00:13:44.782 "raid_level": "raid1", 00:13:44.782 "superblock": true, 00:13:44.782 "num_base_bdevs": 4, 00:13:44.782 "num_base_bdevs_discovered": 3, 00:13:44.782 "num_base_bdevs_operational": 3, 00:13:44.782 "base_bdevs_list": [ 00:13:44.782 { 00:13:44.782 "name": "spare", 00:13:44.782 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:44.782 "is_configured": true, 00:13:44.782 "data_offset": 2048, 
00:13:44.782 "data_size": 63488 00:13:44.782 }, 00:13:44.782 { 00:13:44.782 "name": null, 00:13:44.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.782 "is_configured": false, 00:13:44.782 "data_offset": 0, 00:13:44.782 "data_size": 63488 00:13:44.782 }, 00:13:44.782 { 00:13:44.782 "name": "BaseBdev3", 00:13:44.782 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:44.782 "is_configured": true, 00:13:44.782 "data_offset": 2048, 00:13:44.782 "data_size": 63488 00:13:44.782 }, 00:13:44.782 { 00:13:44.782 "name": "BaseBdev4", 00:13:44.782 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:44.782 "is_configured": true, 00:13:44.782 "data_offset": 2048, 00:13:44.782 "data_size": 63488 00:13:44.782 } 00:13:44.782 ] 00:13:44.782 }' 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.782 
19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.782 19:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.042 19:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.042 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.042 "name": "raid_bdev1", 00:13:45.042 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:45.042 "strip_size_kb": 0, 00:13:45.042 "state": "online", 00:13:45.042 "raid_level": "raid1", 00:13:45.042 "superblock": true, 00:13:45.042 "num_base_bdevs": 4, 00:13:45.042 "num_base_bdevs_discovered": 3, 00:13:45.042 "num_base_bdevs_operational": 3, 00:13:45.042 "base_bdevs_list": [ 00:13:45.042 { 00:13:45.042 "name": "spare", 00:13:45.042 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:45.042 "is_configured": true, 00:13:45.042 "data_offset": 2048, 00:13:45.042 "data_size": 63488 00:13:45.042 }, 00:13:45.042 { 00:13:45.042 "name": null, 00:13:45.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.042 "is_configured": false, 00:13:45.042 "data_offset": 0, 00:13:45.042 "data_size": 63488 00:13:45.042 }, 00:13:45.042 { 00:13:45.042 "name": "BaseBdev3", 00:13:45.042 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:45.042 "is_configured": true, 00:13:45.042 "data_offset": 2048, 00:13:45.042 "data_size": 63488 
00:13:45.042 }, 00:13:45.042 { 00:13:45.042 "name": "BaseBdev4", 00:13:45.042 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:45.042 "is_configured": true, 00:13:45.042 "data_offset": 2048, 00:13:45.042 "data_size": 63488 00:13:45.042 } 00:13:45.042 ] 00:13:45.042 }' 00:13:45.042 19:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.042 19:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.302 [2024-12-12 19:42:28.055412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:45.302 [2024-12-12 19:42:28.055464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.302 [2024-12-12 19:42:28.055610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.302 [2024-12-12 19:42:28.055713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.302 [2024-12-12 19:42:28.055726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.302 
19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:45.302 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:45.562 /dev/nbd0 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.562 1+0 records in 00:13:45.562 1+0 records out 00:13:45.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346148 s, 11.8 MB/s 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:45.562 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:45.822 /dev/nbd1 00:13:45.822 19:42:28 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.822 1+0 records in 00:13:45.822 1+0 records out 00:13:45.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488215 s, 8.4 MB/s 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:45.822 19:42:28 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:45.822 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:46.082 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:46.082 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.082 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:46.082 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:46.082 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:46.082 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:46.082 19:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:46.342 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.602 [2024-12-12 19:42:29.286974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:13:46.602 [2024-12-12 19:42:29.287062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.602 [2024-12-12 19:42:29.287097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:46.602 [2024-12-12 19:42:29.287111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.602 [2024-12-12 19:42:29.289949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.602 [2024-12-12 19:42:29.289996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:46.602 [2024-12-12 19:42:29.290123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:46.602 [2024-12-12 19:42:29.290209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.602 [2024-12-12 19:42:29.290439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.602 [2024-12-12 19:42:29.290605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:46.602 spare 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.602 [2024-12-12 19:42:29.390539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:46.602 [2024-12-12 19:42:29.390646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.602 [2024-12-12 19:42:29.391012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:46.602 [2024-12-12 19:42:29.391236] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:46.602 [2024-12-12 19:42:29.391252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:46.602 [2024-12-12 19:42:29.391479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:46.602 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.862 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.862 "name": "raid_bdev1", 00:13:46.862 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:46.862 "strip_size_kb": 0, 00:13:46.862 "state": "online", 00:13:46.862 "raid_level": "raid1", 00:13:46.862 "superblock": true, 00:13:46.862 "num_base_bdevs": 4, 00:13:46.862 "num_base_bdevs_discovered": 3, 00:13:46.862 "num_base_bdevs_operational": 3, 00:13:46.862 "base_bdevs_list": [ 00:13:46.862 { 00:13:46.862 "name": "spare", 00:13:46.862 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:46.862 "is_configured": true, 00:13:46.862 "data_offset": 2048, 00:13:46.862 "data_size": 63488 00:13:46.862 }, 00:13:46.862 { 00:13:46.862 "name": null, 00:13:46.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.862 "is_configured": false, 00:13:46.862 "data_offset": 2048, 00:13:46.862 "data_size": 63488 00:13:46.862 }, 00:13:46.862 { 00:13:46.862 "name": "BaseBdev3", 00:13:46.862 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:46.862 "is_configured": true, 00:13:46.862 "data_offset": 2048, 00:13:46.862 "data_size": 63488 00:13:46.862 }, 00:13:46.862 { 00:13:46.862 "name": "BaseBdev4", 00:13:46.862 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:46.862 "is_configured": true, 00:13:46.862 "data_offset": 2048, 00:13:46.862 "data_size": 63488 00:13:46.862 } 00:13:46.862 ] 00:13:46.862 }' 00:13:46.862 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.862 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.122 19:42:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.122 "name": "raid_bdev1", 00:13:47.122 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:47.122 "strip_size_kb": 0, 00:13:47.122 "state": "online", 00:13:47.122 "raid_level": "raid1", 00:13:47.122 "superblock": true, 00:13:47.122 "num_base_bdevs": 4, 00:13:47.122 "num_base_bdevs_discovered": 3, 00:13:47.122 "num_base_bdevs_operational": 3, 00:13:47.122 "base_bdevs_list": [ 00:13:47.122 { 00:13:47.122 "name": "spare", 00:13:47.122 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:47.122 "is_configured": true, 00:13:47.122 "data_offset": 2048, 00:13:47.122 "data_size": 63488 00:13:47.122 }, 00:13:47.122 { 00:13:47.122 "name": null, 00:13:47.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.122 "is_configured": false, 00:13:47.122 "data_offset": 2048, 00:13:47.122 "data_size": 63488 00:13:47.122 }, 00:13:47.122 { 00:13:47.122 "name": "BaseBdev3", 00:13:47.122 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:47.122 "is_configured": true, 00:13:47.122 "data_offset": 2048, 00:13:47.122 "data_size": 63488 00:13:47.122 
}, 00:13:47.122 { 00:13:47.122 "name": "BaseBdev4", 00:13:47.122 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:47.122 "is_configured": true, 00:13:47.122 "data_offset": 2048, 00:13:47.122 "data_size": 63488 00:13:47.122 } 00:13:47.122 ] 00:13:47.122 }' 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.122 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.382 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.382 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.382 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.382 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.382 19:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:47.382 19:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.382 [2024-12-12 19:42:30.030408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.382 "name": "raid_bdev1", 00:13:47.382 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:47.382 "strip_size_kb": 0, 00:13:47.382 "state": "online", 00:13:47.382 "raid_level": "raid1", 00:13:47.382 "superblock": true, 00:13:47.382 "num_base_bdevs": 4, 00:13:47.382 "num_base_bdevs_discovered": 2, 00:13:47.382 "num_base_bdevs_operational": 
2, 00:13:47.382 "base_bdevs_list": [ 00:13:47.382 { 00:13:47.382 "name": null, 00:13:47.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.382 "is_configured": false, 00:13:47.382 "data_offset": 0, 00:13:47.382 "data_size": 63488 00:13:47.382 }, 00:13:47.382 { 00:13:47.382 "name": null, 00:13:47.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.382 "is_configured": false, 00:13:47.382 "data_offset": 2048, 00:13:47.382 "data_size": 63488 00:13:47.382 }, 00:13:47.382 { 00:13:47.382 "name": "BaseBdev3", 00:13:47.382 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:47.382 "is_configured": true, 00:13:47.382 "data_offset": 2048, 00:13:47.382 "data_size": 63488 00:13:47.382 }, 00:13:47.382 { 00:13:47.382 "name": "BaseBdev4", 00:13:47.382 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:47.382 "is_configured": true, 00:13:47.382 "data_offset": 2048, 00:13:47.382 "data_size": 63488 00:13:47.382 } 00:13:47.382 ] 00:13:47.382 }' 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.382 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.642 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.642 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.642 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.642 [2024-12-12 19:42:30.469759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.642 [2024-12-12 19:42:30.470148] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:47.642 [2024-12-12 19:42:30.470255] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:47.642 [2024-12-12 19:42:30.470372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.642 [2024-12-12 19:42:30.485286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:47.901 19:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.901 19:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:47.901 [2024-12-12 19:42:30.487621] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.841 "name": "raid_bdev1", 00:13:48.841 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:48.841 "strip_size_kb": 0, 00:13:48.841 "state": "online", 00:13:48.841 "raid_level": "raid1", 
00:13:48.841 "superblock": true, 00:13:48.841 "num_base_bdevs": 4, 00:13:48.841 "num_base_bdevs_discovered": 3, 00:13:48.841 "num_base_bdevs_operational": 3, 00:13:48.841 "process": { 00:13:48.841 "type": "rebuild", 00:13:48.841 "target": "spare", 00:13:48.841 "progress": { 00:13:48.841 "blocks": 20480, 00:13:48.841 "percent": 32 00:13:48.841 } 00:13:48.841 }, 00:13:48.841 "base_bdevs_list": [ 00:13:48.841 { 00:13:48.841 "name": "spare", 00:13:48.841 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:48.841 "is_configured": true, 00:13:48.841 "data_offset": 2048, 00:13:48.841 "data_size": 63488 00:13:48.841 }, 00:13:48.841 { 00:13:48.841 "name": null, 00:13:48.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.841 "is_configured": false, 00:13:48.841 "data_offset": 2048, 00:13:48.841 "data_size": 63488 00:13:48.841 }, 00:13:48.841 { 00:13:48.841 "name": "BaseBdev3", 00:13:48.841 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:48.841 "is_configured": true, 00:13:48.841 "data_offset": 2048, 00:13:48.841 "data_size": 63488 00:13:48.841 }, 00:13:48.841 { 00:13:48.841 "name": "BaseBdev4", 00:13:48.841 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:48.841 "is_configured": true, 00:13:48.841 "data_offset": 2048, 00:13:48.841 "data_size": 63488 00:13:48.841 } 00:13:48.841 ] 00:13:48.841 }' 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:48.841 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.841 [2024-12-12 19:42:31.628160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.101 [2024-12-12 19:42:31.697880] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:49.101 [2024-12-12 19:42:31.697986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.101 [2024-12-12 19:42:31.698011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.101 [2024-12-12 19:42:31.698024] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.101 "name": "raid_bdev1", 00:13:49.101 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:49.101 "strip_size_kb": 0, 00:13:49.101 "state": "online", 00:13:49.101 "raid_level": "raid1", 00:13:49.101 "superblock": true, 00:13:49.101 "num_base_bdevs": 4, 00:13:49.101 "num_base_bdevs_discovered": 2, 00:13:49.101 "num_base_bdevs_operational": 2, 00:13:49.101 "base_bdevs_list": [ 00:13:49.101 { 00:13:49.101 "name": null, 00:13:49.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.101 "is_configured": false, 00:13:49.101 "data_offset": 0, 00:13:49.101 "data_size": 63488 00:13:49.101 }, 00:13:49.101 { 00:13:49.101 "name": null, 00:13:49.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.101 "is_configured": false, 00:13:49.101 "data_offset": 2048, 00:13:49.101 "data_size": 63488 00:13:49.101 }, 00:13:49.101 { 00:13:49.101 "name": "BaseBdev3", 00:13:49.101 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:49.101 "is_configured": true, 00:13:49.101 "data_offset": 2048, 00:13:49.101 "data_size": 63488 00:13:49.101 }, 00:13:49.101 { 00:13:49.101 "name": "BaseBdev4", 00:13:49.101 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:49.101 "is_configured": true, 00:13:49.101 "data_offset": 2048, 00:13:49.101 "data_size": 63488 00:13:49.101 } 00:13:49.101 ] 00:13:49.101 }' 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:49.101 19:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.361 19:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:49.361 19:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.361 19:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.361 [2024-12-12 19:42:32.166840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:49.361 [2024-12-12 19:42:32.167031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.361 [2024-12-12 19:42:32.167094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:49.361 [2024-12-12 19:42:32.167180] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.361 [2024-12-12 19:42:32.167896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.361 [2024-12-12 19:42:32.167983] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:49.361 [2024-12-12 19:42:32.168178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:49.361 [2024-12-12 19:42:32.168227] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:49.361 [2024-12-12 19:42:32.168301] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:49.361 [2024-12-12 19:42:32.168400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.361 [2024-12-12 19:42:32.184106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:49.361 spare 00:13:49.361 19:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.361 19:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:49.361 [2024-12-12 19:42:32.186509] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.741 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.741 "name": "raid_bdev1", 00:13:50.741 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:50.741 "strip_size_kb": 0, 00:13:50.741 "state": "online", 00:13:50.741 
"raid_level": "raid1", 00:13:50.741 "superblock": true, 00:13:50.741 "num_base_bdevs": 4, 00:13:50.741 "num_base_bdevs_discovered": 3, 00:13:50.741 "num_base_bdevs_operational": 3, 00:13:50.741 "process": { 00:13:50.741 "type": "rebuild", 00:13:50.741 "target": "spare", 00:13:50.741 "progress": { 00:13:50.741 "blocks": 20480, 00:13:50.741 "percent": 32 00:13:50.741 } 00:13:50.741 }, 00:13:50.741 "base_bdevs_list": [ 00:13:50.741 { 00:13:50.741 "name": "spare", 00:13:50.741 "uuid": "79420e40-fa00-577c-b097-c68b4402784c", 00:13:50.741 "is_configured": true, 00:13:50.741 "data_offset": 2048, 00:13:50.741 "data_size": 63488 00:13:50.741 }, 00:13:50.741 { 00:13:50.741 "name": null, 00:13:50.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.741 "is_configured": false, 00:13:50.741 "data_offset": 2048, 00:13:50.741 "data_size": 63488 00:13:50.741 }, 00:13:50.741 { 00:13:50.741 "name": "BaseBdev3", 00:13:50.742 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:50.742 "is_configured": true, 00:13:50.742 "data_offset": 2048, 00:13:50.742 "data_size": 63488 00:13:50.742 }, 00:13:50.742 { 00:13:50.742 "name": "BaseBdev4", 00:13:50.742 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:50.742 "is_configured": true, 00:13:50.742 "data_offset": 2048, 00:13:50.742 "data_size": 63488 00:13:50.742 } 00:13:50.742 ] 00:13:50.742 }' 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.742 [2024-12-12 19:42:33.350440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.742 [2024-12-12 19:42:33.397166] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:50.742 [2024-12-12 19:42:33.397250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.742 [2024-12-12 19:42:33.397270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.742 [2024-12-12 19:42:33.397282] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.742 
19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.742 "name": "raid_bdev1", 00:13:50.742 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:50.742 "strip_size_kb": 0, 00:13:50.742 "state": "online", 00:13:50.742 "raid_level": "raid1", 00:13:50.742 "superblock": true, 00:13:50.742 "num_base_bdevs": 4, 00:13:50.742 "num_base_bdevs_discovered": 2, 00:13:50.742 "num_base_bdevs_operational": 2, 00:13:50.742 "base_bdevs_list": [ 00:13:50.742 { 00:13:50.742 "name": null, 00:13:50.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.742 "is_configured": false, 00:13:50.742 "data_offset": 0, 00:13:50.742 "data_size": 63488 00:13:50.742 }, 00:13:50.742 { 00:13:50.742 "name": null, 00:13:50.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.742 "is_configured": false, 00:13:50.742 "data_offset": 2048, 00:13:50.742 "data_size": 63488 00:13:50.742 }, 00:13:50.742 { 00:13:50.742 "name": "BaseBdev3", 00:13:50.742 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:50.742 "is_configured": true, 00:13:50.742 "data_offset": 2048, 00:13:50.742 "data_size": 63488 00:13:50.742 }, 00:13:50.742 { 00:13:50.742 "name": "BaseBdev4", 00:13:50.742 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:50.742 "is_configured": true, 00:13:50.742 "data_offset": 2048, 00:13:50.742 "data_size": 63488 00:13:50.742 } 00:13:50.742 ] 00:13:50.742 }' 00:13:50.742 19:42:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.742 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.310 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.310 "name": "raid_bdev1", 00:13:51.310 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:51.310 "strip_size_kb": 0, 00:13:51.310 "state": "online", 00:13:51.310 "raid_level": "raid1", 00:13:51.310 "superblock": true, 00:13:51.310 "num_base_bdevs": 4, 00:13:51.310 "num_base_bdevs_discovered": 2, 00:13:51.311 "num_base_bdevs_operational": 2, 00:13:51.311 "base_bdevs_list": [ 00:13:51.311 { 00:13:51.311 "name": null, 00:13:51.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.311 "is_configured": false, 00:13:51.311 "data_offset": 0, 00:13:51.311 "data_size": 63488 00:13:51.311 }, 00:13:51.311 
{ 00:13:51.311 "name": null, 00:13:51.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.311 "is_configured": false, 00:13:51.311 "data_offset": 2048, 00:13:51.311 "data_size": 63488 00:13:51.311 }, 00:13:51.311 { 00:13:51.311 "name": "BaseBdev3", 00:13:51.311 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:51.311 "is_configured": true, 00:13:51.311 "data_offset": 2048, 00:13:51.311 "data_size": 63488 00:13:51.311 }, 00:13:51.311 { 00:13:51.311 "name": "BaseBdev4", 00:13:51.311 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:51.311 "is_configured": true, 00:13:51.311 "data_offset": 2048, 00:13:51.311 "data_size": 63488 00:13:51.311 } 00:13:51.311 ] 00:13:51.311 }' 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.311 [2024-12-12 19:42:33.985788] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:51.311 [2024-12-12 19:42:33.985869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.311 [2024-12-12 19:42:33.985898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:51.311 [2024-12-12 19:42:33.985913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.311 [2024-12-12 19:42:33.986499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.311 [2024-12-12 19:42:33.986526] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.311 [2024-12-12 19:42:33.986649] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:51.311 [2024-12-12 19:42:33.986673] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:51.311 [2024-12-12 19:42:33.986685] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:51.311 [2024-12-12 19:42:33.986735] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:51.311 BaseBdev1 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.311 19:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.287 19:42:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.287 19:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.287 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.287 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.287 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.287 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.287 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.287 "name": "raid_bdev1", 00:13:52.287 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:52.287 "strip_size_kb": 0, 00:13:52.287 "state": "online", 00:13:52.287 "raid_level": "raid1", 00:13:52.287 "superblock": true, 00:13:52.287 "num_base_bdevs": 4, 00:13:52.287 "num_base_bdevs_discovered": 2, 00:13:52.287 "num_base_bdevs_operational": 2, 00:13:52.287 "base_bdevs_list": [ 00:13:52.287 { 00:13:52.287 "name": null, 00:13:52.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.287 "is_configured": false, 00:13:52.287 "data_offset": 0, 00:13:52.287 "data_size": 63488 00:13:52.287 }, 00:13:52.287 { 00:13:52.287 "name": null, 00:13:52.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.287 
"is_configured": false, 00:13:52.287 "data_offset": 2048, 00:13:52.287 "data_size": 63488 00:13:52.287 }, 00:13:52.287 { 00:13:52.287 "name": "BaseBdev3", 00:13:52.287 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:52.287 "is_configured": true, 00:13:52.287 "data_offset": 2048, 00:13:52.287 "data_size": 63488 00:13:52.287 }, 00:13:52.287 { 00:13:52.287 "name": "BaseBdev4", 00:13:52.287 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:52.287 "is_configured": true, 00:13:52.287 "data_offset": 2048, 00:13:52.287 "data_size": 63488 00:13:52.287 } 00:13:52.287 ] 00:13:52.287 }' 00:13:52.287 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.287 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.854 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:52.854 "name": "raid_bdev1", 00:13:52.854 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:52.854 "strip_size_kb": 0, 00:13:52.854 "state": "online", 00:13:52.854 "raid_level": "raid1", 00:13:52.854 "superblock": true, 00:13:52.854 "num_base_bdevs": 4, 00:13:52.854 "num_base_bdevs_discovered": 2, 00:13:52.854 "num_base_bdevs_operational": 2, 00:13:52.854 "base_bdevs_list": [ 00:13:52.854 { 00:13:52.854 "name": null, 00:13:52.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.854 "is_configured": false, 00:13:52.854 "data_offset": 0, 00:13:52.854 "data_size": 63488 00:13:52.854 }, 00:13:52.854 { 00:13:52.854 "name": null, 00:13:52.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.854 "is_configured": false, 00:13:52.854 "data_offset": 2048, 00:13:52.854 "data_size": 63488 00:13:52.854 }, 00:13:52.854 { 00:13:52.854 "name": "BaseBdev3", 00:13:52.854 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:52.854 "is_configured": true, 00:13:52.854 "data_offset": 2048, 00:13:52.854 "data_size": 63488 00:13:52.855 }, 00:13:52.855 { 00:13:52.855 "name": "BaseBdev4", 00:13:52.855 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:52.855 "is_configured": true, 00:13:52.855 "data_offset": 2048, 00:13:52.855 "data_size": 63488 00:13:52.855 } 00:13:52.855 ] 00:13:52.855 }' 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.855 [2024-12-12 19:42:35.575393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.855 request: 00:13:52.855 [2024-12-12 19:42:35.575810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:52.855 [2024-12-12 19:42:35.575839] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:52.855 { 00:13:52.855 "base_bdev": "BaseBdev1", 00:13:52.855 "raid_bdev": "raid_bdev1", 00:13:52.855 "method": "bdev_raid_add_base_bdev", 00:13:52.855 "req_id": 1 00:13:52.855 } 00:13:52.855 Got JSON-RPC error response 00:13:52.855 response: 00:13:52.855 { 00:13:52.855 "code": -22, 00:13:52.855 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:52.855 } 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.855 19:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:53.795 19:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.055 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.055 "name": "raid_bdev1", 00:13:54.055 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:54.055 "strip_size_kb": 0, 00:13:54.055 "state": "online", 00:13:54.055 "raid_level": "raid1", 00:13:54.055 "superblock": true, 00:13:54.055 "num_base_bdevs": 4, 00:13:54.055 "num_base_bdevs_discovered": 2, 00:13:54.055 "num_base_bdevs_operational": 2, 00:13:54.055 "base_bdevs_list": [ 00:13:54.055 { 00:13:54.055 "name": null, 00:13:54.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.055 "is_configured": false, 00:13:54.055 "data_offset": 0, 00:13:54.055 "data_size": 63488 00:13:54.055 }, 00:13:54.055 { 00:13:54.055 "name": null, 00:13:54.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.055 "is_configured": false, 00:13:54.055 "data_offset": 2048, 00:13:54.055 "data_size": 63488 00:13:54.055 }, 00:13:54.055 { 00:13:54.055 "name": "BaseBdev3", 00:13:54.055 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:54.055 "is_configured": true, 00:13:54.055 "data_offset": 2048, 00:13:54.055 "data_size": 63488 00:13:54.055 }, 00:13:54.055 { 00:13:54.055 "name": "BaseBdev4", 00:13:54.055 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:54.055 "is_configured": true, 00:13:54.055 "data_offset": 2048, 00:13:54.055 "data_size": 63488 00:13:54.055 } 00:13:54.055 ] 00:13:54.055 }' 00:13:54.055 19:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.055 19:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.314 19:42:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.314 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.314 "name": "raid_bdev1", 00:13:54.314 "uuid": "a3b22312-b392-488c-ac84-e0d5e422d41e", 00:13:54.314 "strip_size_kb": 0, 00:13:54.314 "state": "online", 00:13:54.314 "raid_level": "raid1", 00:13:54.314 "superblock": true, 00:13:54.314 "num_base_bdevs": 4, 00:13:54.314 "num_base_bdevs_discovered": 2, 00:13:54.314 "num_base_bdevs_operational": 2, 00:13:54.314 "base_bdevs_list": [ 00:13:54.314 { 00:13:54.314 "name": null, 00:13:54.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.314 "is_configured": false, 00:13:54.314 "data_offset": 0, 00:13:54.314 "data_size": 63488 00:13:54.314 }, 00:13:54.314 { 00:13:54.314 "name": null, 00:13:54.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.314 "is_configured": false, 00:13:54.314 "data_offset": 2048, 00:13:54.314 "data_size": 63488 00:13:54.314 }, 00:13:54.314 { 00:13:54.314 "name": "BaseBdev3", 00:13:54.315 "uuid": "7bcd7a18-6c99-5298-8b8b-86b40cc975be", 00:13:54.315 "is_configured": true, 00:13:54.315 "data_offset": 2048, 00:13:54.315 "data_size": 63488 00:13:54.315 }, 
00:13:54.315 { 00:13:54.315 "name": "BaseBdev4", 00:13:54.315 "uuid": "7b84d033-1922-56f3-84a6-7d2c098141f7", 00:13:54.315 "is_configured": true, 00:13:54.315 "data_offset": 2048, 00:13:54.315 "data_size": 63488 00:13:54.315 } 00:13:54.315 ] 00:13:54.315 }' 00:13:54.315 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.315 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.315 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79682 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79682 ']' 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 79682 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79682 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79682' 00:13:54.575 killing process with pid 79682 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 79682 00:13:54.575 Received shutdown signal, test time was about 60.000000 seconds 00:13:54.575 00:13:54.575 Latency(us) 00:13:54.575 
[2024-12-12T19:42:37.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.575 [2024-12-12T19:42:37.420Z] =================================================================================================================== 00:13:54.575 [2024-12-12T19:42:37.420Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:54.575 [2024-12-12 19:42:37.221704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.575 19:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 79682 00:13:54.575 [2024-12-12 19:42:37.221901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.575 [2024-12-12 19:42:37.222005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.575 [2024-12-12 19:42:37.222034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:55.145 [2024-12-12 19:42:37.749526] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.526 19:42:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:56.526 ************************************ 00:13:56.526 END TEST raid_rebuild_test_sb 00:13:56.526 ************************************ 00:13:56.526 00:13:56.526 real 0m25.482s 00:13:56.526 user 0m30.142s 00:13:56.526 sys 0m3.883s 00:13:56.526 19:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.526 19:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.526 19:42:39 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:56.526 19:42:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:56.526 19:42:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.526 19:42:39 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:13:56.526 ************************************ 00:13:56.526 START TEST raid_rebuild_test_io 00:13:56.526 ************************************ 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=80441 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 80441 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 80441 ']' 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.526 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.526 [2024-12-12 19:42:39.167917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:56.526 [2024-12-12 19:42:39.168090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:56.526 Zero copy mechanism will not be used. 00:13:56.526 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80441 ] 00:13:56.526 [2024-12-12 19:42:39.340059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.786 [2024-12-12 19:42:39.481943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.045 [2024-12-12 19:42:39.724935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.045 [2024-12-12 19:42:39.725117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.306 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.306 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:57.306 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.306 19:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:57.306 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.306 19:42:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.306 BaseBdev1_malloc 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.306 [2024-12-12 19:42:40.055868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:57.306 [2024-12-12 19:42:40.055996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.306 [2024-12-12 19:42:40.056028] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:57.306 [2024-12-12 19:42:40.056043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.306 [2024-12-12 19:42:40.058727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.306 [2024-12-12 19:42:40.058883] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:57.306 BaseBdev1 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:57.306 BaseBdev2_malloc 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.306 [2024-12-12 19:42:40.118878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:57.306 [2024-12-12 19:42:40.118993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.306 [2024-12-12 19:42:40.119019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:57.306 [2024-12-12 19:42:40.119039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.306 [2024-12-12 19:42:40.121521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.306 [2024-12-12 19:42:40.121584] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:57.306 BaseBdev2 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.306 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 BaseBdev3_malloc 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 [2024-12-12 19:42:40.193021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:57.569 [2024-12-12 19:42:40.193179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.569 [2024-12-12 19:42:40.193210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:57.569 [2024-12-12 19:42:40.193225] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.569 [2024-12-12 19:42:40.195732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.569 [2024-12-12 19:42:40.195780] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:57.569 BaseBdev3 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 BaseBdev4_malloc 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 [2024-12-12 19:42:40.253310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:57.569 [2024-12-12 19:42:40.253405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.569 [2024-12-12 19:42:40.253433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:57.569 [2024-12-12 19:42:40.253446] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.569 [2024-12-12 19:42:40.255945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.569 [2024-12-12 19:42:40.255996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:57.569 BaseBdev4 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 spare_malloc 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 spare_delay 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 [2024-12-12 19:42:40.327706] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:57.569 [2024-12-12 19:42:40.327789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.569 [2024-12-12 19:42:40.327827] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:57.569 [2024-12-12 19:42:40.327841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.569 [2024-12-12 19:42:40.330295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.569 [2024-12-12 19:42:40.330348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:57.569 spare 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 [2024-12-12 19:42:40.339739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.569 [2024-12-12 19:42:40.341919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.569 [2024-12-12 19:42:40.342066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.569 [2024-12-12 19:42:40.342134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:57.569 [2024-12-12 19:42:40.342266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:57.569 [2024-12-12 19:42:40.342288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:57.569 [2024-12-12 19:42:40.342586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:57.569 [2024-12-12 19:42:40.342799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:57.569 [2024-12-12 19:42:40.342814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:57.569 [2024-12-12 19:42:40.342979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.569 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.569 "name": "raid_bdev1", 00:13:57.569 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:13:57.569 "strip_size_kb": 0, 00:13:57.569 "state": "online", 00:13:57.569 "raid_level": "raid1", 00:13:57.569 "superblock": false, 00:13:57.569 "num_base_bdevs": 4, 00:13:57.569 "num_base_bdevs_discovered": 4, 00:13:57.569 "num_base_bdevs_operational": 4, 00:13:57.569 "base_bdevs_list": [ 00:13:57.569 { 00:13:57.569 "name": "BaseBdev1", 00:13:57.569 "uuid": "5a2935ca-3f7b-56a0-91f7-3bb9cc65361c", 00:13:57.569 "is_configured": true, 00:13:57.569 "data_offset": 0, 00:13:57.569 "data_size": 65536 00:13:57.569 }, 00:13:57.569 { 00:13:57.569 "name": "BaseBdev2", 00:13:57.569 "uuid": "3ad1835d-6163-56ab-81a3-2b9413aa4fc6", 00:13:57.569 "is_configured": true, 00:13:57.569 "data_offset": 0, 00:13:57.569 "data_size": 65536 00:13:57.569 }, 00:13:57.569 { 00:13:57.569 "name": "BaseBdev3", 00:13:57.569 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:13:57.569 "is_configured": true, 00:13:57.569 "data_offset": 0, 00:13:57.570 "data_size": 65536 00:13:57.570 }, 00:13:57.570 { 00:13:57.570 "name": "BaseBdev4", 00:13:57.570 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:13:57.570 "is_configured": true, 00:13:57.570 "data_offset": 0, 00:13:57.570 "data_size": 65536 00:13:57.570 } 00:13:57.570 ] 00:13:57.570 }' 00:13:57.570 
19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.570 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.139 [2024-12-12 19:42:40.823363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:58.139 19:42:40 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.139 [2024-12-12 19:42:40.906844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.139 "name": "raid_bdev1", 00:13:58.139 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:13:58.139 "strip_size_kb": 0, 00:13:58.139 "state": "online", 00:13:58.139 "raid_level": "raid1", 00:13:58.139 "superblock": false, 00:13:58.139 "num_base_bdevs": 4, 00:13:58.139 "num_base_bdevs_discovered": 3, 00:13:58.139 "num_base_bdevs_operational": 3, 00:13:58.139 "base_bdevs_list": [ 00:13:58.139 { 00:13:58.139 "name": null, 00:13:58.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.139 "is_configured": false, 00:13:58.139 "data_offset": 0, 00:13:58.139 "data_size": 65536 00:13:58.139 }, 00:13:58.139 { 00:13:58.139 "name": "BaseBdev2", 00:13:58.139 "uuid": "3ad1835d-6163-56ab-81a3-2b9413aa4fc6", 00:13:58.139 "is_configured": true, 00:13:58.139 "data_offset": 0, 00:13:58.139 "data_size": 65536 00:13:58.139 }, 00:13:58.139 { 00:13:58.139 "name": "BaseBdev3", 00:13:58.139 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:13:58.139 "is_configured": true, 00:13:58.139 "data_offset": 0, 00:13:58.139 "data_size": 65536 00:13:58.139 }, 00:13:58.139 { 00:13:58.139 "name": "BaseBdev4", 00:13:58.139 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:13:58.139 "is_configured": true, 00:13:58.139 "data_offset": 0, 00:13:58.139 "data_size": 65536 00:13:58.139 } 00:13:58.139 ] 00:13:58.139 }' 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.139 19:42:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.398 [2024-12-12 19:42:41.004786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:58.398 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.398 Zero copy mechanism will not be used. 00:13:58.398 Running I/O for 60 seconds... 
00:13:58.658 19:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:58.658 19:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.658 19:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.658 [2024-12-12 19:42:41.354913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.658 19:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.658 19:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:58.658 [2024-12-12 19:42:41.425766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:58.658 [2024-12-12 19:42:41.428025] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:58.918 [2024-12-12 19:42:41.550147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:58.918 [2024-12-12 19:42:41.550726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:58.918 [2024-12-12 19:42:41.761700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:58.918 [2024-12-12 19:42:41.762535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:59.438 144.00 IOPS, 432.00 MiB/s [2024-12-12T19:42:42.283Z] [2024-12-12 19:42:42.240781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.698 "name": "raid_bdev1", 00:13:59.698 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:13:59.698 "strip_size_kb": 0, 00:13:59.698 "state": "online", 00:13:59.698 "raid_level": "raid1", 00:13:59.698 "superblock": false, 00:13:59.698 "num_base_bdevs": 4, 00:13:59.698 "num_base_bdevs_discovered": 4, 00:13:59.698 "num_base_bdevs_operational": 4, 00:13:59.698 "process": { 00:13:59.698 "type": "rebuild", 00:13:59.698 "target": "spare", 00:13:59.698 "progress": { 00:13:59.698 "blocks": 12288, 00:13:59.698 "percent": 18 00:13:59.698 } 00:13:59.698 }, 00:13:59.698 "base_bdevs_list": [ 00:13:59.698 { 00:13:59.698 "name": "spare", 00:13:59.698 "uuid": "2fbfc660-feb7-5562-a972-91c3386afa30", 00:13:59.698 "is_configured": true, 00:13:59.698 "data_offset": 0, 00:13:59.698 "data_size": 65536 00:13:59.698 }, 00:13:59.698 { 00:13:59.698 "name": "BaseBdev2", 00:13:59.698 "uuid": "3ad1835d-6163-56ab-81a3-2b9413aa4fc6", 00:13:59.698 "is_configured": true, 00:13:59.698 "data_offset": 0, 00:13:59.698 
"data_size": 65536 00:13:59.698 }, 00:13:59.698 { 00:13:59.698 "name": "BaseBdev3", 00:13:59.698 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:13:59.698 "is_configured": true, 00:13:59.698 "data_offset": 0, 00:13:59.698 "data_size": 65536 00:13:59.698 }, 00:13:59.698 { 00:13:59.698 "name": "BaseBdev4", 00:13:59.698 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:13:59.698 "is_configured": true, 00:13:59.698 "data_offset": 0, 00:13:59.698 "data_size": 65536 00:13:59.698 } 00:13:59.698 ] 00:13:59.698 }' 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.698 [2024-12-12 19:42:42.478211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.698 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.958 [2024-12-12 19:42:42.566159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.958 [2024-12-12 19:42:42.598365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:59.958 [2024-12-12 19:42:42.700387] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:59.958 [2024-12-12 19:42:42.704230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:59.958 [2024-12-12 19:42:42.704320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.958 [2024-12-12 19:42:42.704347] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:59.958 [2024-12-12 19:42:42.727288] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.958 "name": "raid_bdev1", 00:13:59.958 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:13:59.958 "strip_size_kb": 0, 00:13:59.958 "state": "online", 00:13:59.958 "raid_level": "raid1", 00:13:59.958 "superblock": false, 00:13:59.958 "num_base_bdevs": 4, 00:13:59.958 "num_base_bdevs_discovered": 3, 00:13:59.958 "num_base_bdevs_operational": 3, 00:13:59.958 "base_bdevs_list": [ 00:13:59.958 { 00:13:59.958 "name": null, 00:13:59.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.958 "is_configured": false, 00:13:59.958 "data_offset": 0, 00:13:59.958 "data_size": 65536 00:13:59.958 }, 00:13:59.958 { 00:13:59.958 "name": "BaseBdev2", 00:13:59.958 "uuid": "3ad1835d-6163-56ab-81a3-2b9413aa4fc6", 00:13:59.958 "is_configured": true, 00:13:59.958 "data_offset": 0, 00:13:59.958 "data_size": 65536 00:13:59.958 }, 00:13:59.958 { 00:13:59.958 "name": "BaseBdev3", 00:13:59.958 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:13:59.958 "is_configured": true, 00:13:59.958 "data_offset": 0, 00:13:59.958 "data_size": 65536 00:13:59.958 }, 00:13:59.958 { 00:13:59.958 "name": "BaseBdev4", 00:13:59.958 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:13:59.958 "is_configured": true, 00:13:59.958 "data_offset": 0, 00:13:59.958 "data_size": 65536 00:13:59.958 } 00:13:59.958 ] 00:13:59.958 }' 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.958 19:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.477 141.50 IOPS, 424.50 MiB/s [2024-12-12T19:42:43.322Z] 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.477 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.477 "name": "raid_bdev1", 00:14:00.477 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:14:00.477 "strip_size_kb": 0, 00:14:00.477 "state": "online", 00:14:00.477 "raid_level": "raid1", 00:14:00.477 "superblock": false, 00:14:00.477 "num_base_bdevs": 4, 00:14:00.477 "num_base_bdevs_discovered": 3, 00:14:00.477 "num_base_bdevs_operational": 3, 00:14:00.477 "base_bdevs_list": [ 00:14:00.477 { 00:14:00.477 "name": null, 00:14:00.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.477 "is_configured": false, 00:14:00.477 "data_offset": 0, 00:14:00.478 "data_size": 65536 00:14:00.478 }, 00:14:00.478 { 00:14:00.478 "name": "BaseBdev2", 00:14:00.478 "uuid": "3ad1835d-6163-56ab-81a3-2b9413aa4fc6", 00:14:00.478 "is_configured": true, 00:14:00.478 "data_offset": 0, 00:14:00.478 "data_size": 65536 00:14:00.478 }, 00:14:00.478 { 00:14:00.478 "name": "BaseBdev3", 00:14:00.478 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:14:00.478 "is_configured": 
true, 00:14:00.478 "data_offset": 0, 00:14:00.478 "data_size": 65536 00:14:00.478 }, 00:14:00.478 { 00:14:00.478 "name": "BaseBdev4", 00:14:00.478 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:14:00.478 "is_configured": true, 00:14:00.478 "data_offset": 0, 00:14:00.478 "data_size": 65536 00:14:00.478 } 00:14:00.478 ] 00:14:00.478 }' 00:14:00.478 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.478 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.478 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.478 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.478 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.478 19:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.478 19:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.478 [2024-12-12 19:42:43.311599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.737 19:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.737 19:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:00.737 [2024-12-12 19:42:43.376240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:00.737 [2024-12-12 19:42:43.378145] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.737 [2024-12-12 19:42:43.499723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.737 [2024-12-12 19:42:43.500093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:14:00.997 [2024-12-12 19:42:43.616206] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:00.997 [2024-12-12 19:42:43.616523] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:01.257 [2024-12-12 19:42:43.889143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:01.257 [2024-12-12 19:42:43.890578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:01.516 158.00 IOPS, 474.00 MiB/s [2024-12-12T19:42:44.361Z] [2024-12-12 19:42:44.106751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:01.516 [2024-12-12 19:42:44.107647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:01.516 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.516 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.516 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.516 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.516 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.516 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.775 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.775 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.775 19:42:44 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.775 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.776 "name": "raid_bdev1", 00:14:01.776 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:14:01.776 "strip_size_kb": 0, 00:14:01.776 "state": "online", 00:14:01.776 "raid_level": "raid1", 00:14:01.776 "superblock": false, 00:14:01.776 "num_base_bdevs": 4, 00:14:01.776 "num_base_bdevs_discovered": 4, 00:14:01.776 "num_base_bdevs_operational": 4, 00:14:01.776 "process": { 00:14:01.776 "type": "rebuild", 00:14:01.776 "target": "spare", 00:14:01.776 "progress": { 00:14:01.776 "blocks": 12288, 00:14:01.776 "percent": 18 00:14:01.776 } 00:14:01.776 }, 00:14:01.776 "base_bdevs_list": [ 00:14:01.776 { 00:14:01.776 "name": "spare", 00:14:01.776 "uuid": "2fbfc660-feb7-5562-a972-91c3386afa30", 00:14:01.776 "is_configured": true, 00:14:01.776 "data_offset": 0, 00:14:01.776 "data_size": 65536 00:14:01.776 }, 00:14:01.776 { 00:14:01.776 "name": "BaseBdev2", 00:14:01.776 "uuid": "3ad1835d-6163-56ab-81a3-2b9413aa4fc6", 00:14:01.776 "is_configured": true, 00:14:01.776 "data_offset": 0, 00:14:01.776 "data_size": 65536 00:14:01.776 }, 00:14:01.776 { 00:14:01.776 "name": "BaseBdev3", 00:14:01.776 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:14:01.776 "is_configured": true, 00:14:01.776 "data_offset": 0, 00:14:01.776 "data_size": 65536 00:14:01.776 }, 00:14:01.776 { 00:14:01.776 "name": "BaseBdev4", 00:14:01.776 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:14:01.776 "is_configured": true, 00:14:01.776 "data_offset": 0, 00:14:01.776 "data_size": 65536 00:14:01.776 } 00:14:01.776 ] 00:14:01.776 }' 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # 
[[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.776 [2024-12-12 19:42:44.515258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.776 [2024-12-12 19:42:44.580455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:01.776 [2024-12-12 19:42:44.580884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:01.776 [2024-12-12 19:42:44.582226] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:01.776 [2024-12-12 19:42:44.582294] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:01.776 [2024-12-12 19:42:44.582356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.776 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.035 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.035 "name": "raid_bdev1", 00:14:02.035 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:14:02.035 "strip_size_kb": 0, 00:14:02.035 "state": "online", 00:14:02.035 "raid_level": "raid1", 00:14:02.035 "superblock": false, 00:14:02.035 "num_base_bdevs": 4, 00:14:02.035 "num_base_bdevs_discovered": 3, 00:14:02.035 "num_base_bdevs_operational": 3, 00:14:02.035 "process": { 00:14:02.035 "type": "rebuild", 00:14:02.035 "target": "spare", 00:14:02.035 "progress": { 00:14:02.035 "blocks": 16384, 00:14:02.035 "percent": 25 00:14:02.035 } 00:14:02.035 }, 
00:14:02.035 "base_bdevs_list": [ 00:14:02.035 { 00:14:02.035 "name": "spare", 00:14:02.035 "uuid": "2fbfc660-feb7-5562-a972-91c3386afa30", 00:14:02.035 "is_configured": true, 00:14:02.035 "data_offset": 0, 00:14:02.035 "data_size": 65536 00:14:02.035 }, 00:14:02.035 { 00:14:02.035 "name": null, 00:14:02.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.035 "is_configured": false, 00:14:02.035 "data_offset": 0, 00:14:02.035 "data_size": 65536 00:14:02.035 }, 00:14:02.035 { 00:14:02.035 "name": "BaseBdev3", 00:14:02.035 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:14:02.035 "is_configured": true, 00:14:02.035 "data_offset": 0, 00:14:02.035 "data_size": 65536 00:14:02.035 }, 00:14:02.035 { 00:14:02.036 "name": "BaseBdev4", 00:14:02.036 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:14:02.036 "is_configured": true, 00:14:02.036 "data_offset": 0, 00:14:02.036 "data_size": 65536 00:14:02.036 } 00:14:02.036 ] 00:14:02.036 }' 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=480 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.036 19:42:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.036 "name": "raid_bdev1", 00:14:02.036 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:14:02.036 "strip_size_kb": 0, 00:14:02.036 "state": "online", 00:14:02.036 "raid_level": "raid1", 00:14:02.036 "superblock": false, 00:14:02.036 "num_base_bdevs": 4, 00:14:02.036 "num_base_bdevs_discovered": 3, 00:14:02.036 "num_base_bdevs_operational": 3, 00:14:02.036 "process": { 00:14:02.036 "type": "rebuild", 00:14:02.036 "target": "spare", 00:14:02.036 "progress": { 00:14:02.036 "blocks": 18432, 00:14:02.036 "percent": 28 00:14:02.036 } 00:14:02.036 }, 00:14:02.036 "base_bdevs_list": [ 00:14:02.036 { 00:14:02.036 "name": "spare", 00:14:02.036 "uuid": "2fbfc660-feb7-5562-a972-91c3386afa30", 00:14:02.036 "is_configured": true, 00:14:02.036 "data_offset": 0, 00:14:02.036 "data_size": 65536 00:14:02.036 }, 00:14:02.036 { 00:14:02.036 "name": null, 00:14:02.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.036 "is_configured": false, 00:14:02.036 "data_offset": 0, 00:14:02.036 "data_size": 65536 00:14:02.036 }, 00:14:02.036 { 00:14:02.036 "name": "BaseBdev3", 00:14:02.036 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:14:02.036 
"is_configured": true, 00:14:02.036 "data_offset": 0, 00:14:02.036 "data_size": 65536 00:14:02.036 }, 00:14:02.036 { 00:14:02.036 "name": "BaseBdev4", 00:14:02.036 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:14:02.036 "is_configured": true, 00:14:02.036 "data_offset": 0, 00:14:02.036 "data_size": 65536 00:14:02.036 } 00:14:02.036 ] 00:14:02.036 }' 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.036 19:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.296 [2024-12-12 19:42:44.944647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:02.558 137.25 IOPS, 411.75 MiB/s [2024-12-12T19:42:45.403Z] [2024-12-12 19:42:45.173415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:02.558 [2024-12-12 19:42:45.284864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:03.134 [2024-12-12 19:42:45.810662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:03.134 [2024-12-12 19:42:45.811300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.134 "name": "raid_bdev1", 00:14:03.134 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:14:03.134 "strip_size_kb": 0, 00:14:03.134 "state": "online", 00:14:03.134 "raid_level": "raid1", 00:14:03.134 "superblock": false, 00:14:03.134 "num_base_bdevs": 4, 00:14:03.134 "num_base_bdevs_discovered": 3, 00:14:03.134 "num_base_bdevs_operational": 3, 00:14:03.134 "process": { 00:14:03.134 "type": "rebuild", 00:14:03.134 "target": "spare", 00:14:03.134 "progress": { 00:14:03.134 "blocks": 38912, 00:14:03.134 "percent": 59 00:14:03.134 } 00:14:03.134 }, 00:14:03.134 "base_bdevs_list": [ 00:14:03.134 { 00:14:03.134 "name": "spare", 00:14:03.134 "uuid": "2fbfc660-feb7-5562-a972-91c3386afa30", 00:14:03.134 "is_configured": true, 00:14:03.134 "data_offset": 0, 00:14:03.134 "data_size": 65536 00:14:03.134 }, 00:14:03.134 { 00:14:03.134 "name": null, 00:14:03.134 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:03.134 "is_configured": false, 00:14:03.134 "data_offset": 0, 00:14:03.134 "data_size": 65536 00:14:03.134 }, 00:14:03.134 { 00:14:03.134 "name": "BaseBdev3", 00:14:03.134 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:14:03.134 "is_configured": true, 00:14:03.134 "data_offset": 0, 00:14:03.134 "data_size": 65536 00:14:03.134 }, 00:14:03.134 { 00:14:03.134 "name": "BaseBdev4", 00:14:03.134 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:14:03.134 "is_configured": true, 00:14:03.134 "data_offset": 0, 00:14:03.134 "data_size": 65536 00:14:03.134 } 00:14:03.134 ] 00:14:03.134 }' 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.134 19:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.393 122.80 IOPS, 368.40 MiB/s [2024-12-12T19:42:46.238Z] 19:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.393 19:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.393 [2024-12-12 19:42:46.186747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:03.653 [2024-12-12 19:42:46.295274] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:03.653 [2024-12-12 19:42:46.295587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:04.223 108.50 IOPS, 325.50 MiB/s [2024-12-12T19:42:47.068Z] 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.223 19:42:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.485 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.485 "name": "raid_bdev1", 00:14:04.485 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:14:04.485 "strip_size_kb": 0, 00:14:04.485 "state": "online", 00:14:04.485 "raid_level": "raid1", 00:14:04.485 "superblock": false, 00:14:04.485 "num_base_bdevs": 4, 00:14:04.485 "num_base_bdevs_discovered": 3, 00:14:04.485 "num_base_bdevs_operational": 3, 00:14:04.485 "process": { 00:14:04.485 "type": "rebuild", 00:14:04.485 "target": "spare", 00:14:04.485 "progress": { 00:14:04.485 "blocks": 57344, 00:14:04.485 "percent": 87 00:14:04.485 } 00:14:04.485 }, 00:14:04.485 "base_bdevs_list": [ 00:14:04.485 { 00:14:04.485 "name": "spare", 00:14:04.485 "uuid": "2fbfc660-feb7-5562-a972-91c3386afa30", 00:14:04.485 "is_configured": true, 00:14:04.485 "data_offset": 0, 00:14:04.485 "data_size": 65536 00:14:04.485 }, 00:14:04.485 { 00:14:04.485 "name": null, 00:14:04.485 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:04.485 "is_configured": false, 00:14:04.485 "data_offset": 0, 00:14:04.485 "data_size": 65536 00:14:04.485 }, 00:14:04.485 { 00:14:04.485 "name": "BaseBdev3", 00:14:04.485 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:14:04.485 "is_configured": true, 00:14:04.485 "data_offset": 0, 00:14:04.485 "data_size": 65536 00:14:04.485 }, 00:14:04.485 { 00:14:04.485 "name": "BaseBdev4", 00:14:04.485 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:14:04.485 "is_configured": true, 00:14:04.485 "data_offset": 0, 00:14:04.485 "data_size": 65536 00:14:04.485 } 00:14:04.485 ] 00:14:04.485 }' 00:14:04.485 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.485 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.485 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.485 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.485 19:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:04.745 [2024-12-12 19:42:47.395864] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:04.745 [2024-12-12 19:42:47.495660] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:04.745 [2024-12-12 19:42:47.497303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.314 96.86 IOPS, 290.57 MiB/s [2024-12-12T19:42:48.159Z] 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.314 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.314 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.314 19:42:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.315 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.315 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.575 "name": "raid_bdev1", 00:14:05.575 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:14:05.575 "strip_size_kb": 0, 00:14:05.575 "state": "online", 00:14:05.575 "raid_level": "raid1", 00:14:05.575 "superblock": false, 00:14:05.575 "num_base_bdevs": 4, 00:14:05.575 "num_base_bdevs_discovered": 3, 00:14:05.575 "num_base_bdevs_operational": 3, 00:14:05.575 "base_bdevs_list": [ 00:14:05.575 { 00:14:05.575 "name": "spare", 00:14:05.575 "uuid": "2fbfc660-feb7-5562-a972-91c3386afa30", 00:14:05.575 "is_configured": true, 00:14:05.575 "data_offset": 0, 00:14:05.575 "data_size": 65536 00:14:05.575 }, 00:14:05.575 { 00:14:05.575 "name": null, 00:14:05.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.575 "is_configured": false, 00:14:05.575 "data_offset": 0, 00:14:05.575 "data_size": 65536 00:14:05.575 }, 00:14:05.575 { 00:14:05.575 "name": "BaseBdev3", 00:14:05.575 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:14:05.575 "is_configured": true, 00:14:05.575 "data_offset": 0, 00:14:05.575 "data_size": 65536 00:14:05.575 }, 
00:14:05.575 { 00:14:05.575 "name": "BaseBdev4", 00:14:05.575 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:14:05.575 "is_configured": true, 00:14:05.575 "data_offset": 0, 00:14:05.575 "data_size": 65536 00:14:05.575 } 00:14:05.575 ] 00:14:05.575 }' 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:05.575 "name": "raid_bdev1", 00:14:05.575 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:14:05.575 "strip_size_kb": 0, 00:14:05.575 "state": "online", 00:14:05.575 "raid_level": "raid1", 00:14:05.575 "superblock": false, 00:14:05.575 "num_base_bdevs": 4, 00:14:05.575 "num_base_bdevs_discovered": 3, 00:14:05.575 "num_base_bdevs_operational": 3, 00:14:05.575 "base_bdevs_list": [ 00:14:05.575 { 00:14:05.575 "name": "spare", 00:14:05.575 "uuid": "2fbfc660-feb7-5562-a972-91c3386afa30", 00:14:05.575 "is_configured": true, 00:14:05.575 "data_offset": 0, 00:14:05.575 "data_size": 65536 00:14:05.575 }, 00:14:05.575 { 00:14:05.575 "name": null, 00:14:05.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.575 "is_configured": false, 00:14:05.575 "data_offset": 0, 00:14:05.575 "data_size": 65536 00:14:05.575 }, 00:14:05.575 { 00:14:05.575 "name": "BaseBdev3", 00:14:05.575 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:14:05.575 "is_configured": true, 00:14:05.575 "data_offset": 0, 00:14:05.575 "data_size": 65536 00:14:05.575 }, 00:14:05.575 { 00:14:05.575 "name": "BaseBdev4", 00:14:05.575 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:14:05.575 "is_configured": true, 00:14:05.575 "data_offset": 0, 00:14:05.575 "data_size": 65536 00:14:05.575 } 00:14:05.575 ] 00:14:05.575 }' 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.575 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.835 "name": "raid_bdev1", 00:14:05.835 "uuid": "f26453a0-a3f2-402b-a853-1f73e4a833bd", 00:14:05.835 "strip_size_kb": 0, 00:14:05.835 "state": "online", 00:14:05.835 "raid_level": "raid1", 00:14:05.835 "superblock": false, 00:14:05.835 "num_base_bdevs": 4, 00:14:05.835 "num_base_bdevs_discovered": 3, 00:14:05.835 "num_base_bdevs_operational": 3, 00:14:05.835 "base_bdevs_list": [ 00:14:05.835 { 00:14:05.835 "name": "spare", 00:14:05.835 "uuid": 
"2fbfc660-feb7-5562-a972-91c3386afa30", 00:14:05.835 "is_configured": true, 00:14:05.835 "data_offset": 0, 00:14:05.835 "data_size": 65536 00:14:05.835 }, 00:14:05.835 { 00:14:05.835 "name": null, 00:14:05.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.835 "is_configured": false, 00:14:05.835 "data_offset": 0, 00:14:05.835 "data_size": 65536 00:14:05.835 }, 00:14:05.835 { 00:14:05.835 "name": "BaseBdev3", 00:14:05.835 "uuid": "f92502c3-e646-502a-891f-2cc180c7f8a9", 00:14:05.835 "is_configured": true, 00:14:05.835 "data_offset": 0, 00:14:05.835 "data_size": 65536 00:14:05.835 }, 00:14:05.835 { 00:14:05.835 "name": "BaseBdev4", 00:14:05.835 "uuid": "31ce389e-2c06-5e72-8163-7b8bb01683de", 00:14:05.835 "is_configured": true, 00:14:05.835 "data_offset": 0, 00:14:05.835 "data_size": 65536 00:14:05.835 } 00:14:05.835 ] 00:14:05.835 }' 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.835 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.095 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.095 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.095 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.095 [2024-12-12 19:42:48.872147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.095 [2024-12-12 19:42:48.872184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.095 00:14:06.095 Latency(us) 00:14:06.095 [2024-12-12T19:42:48.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.095 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:06.095 raid_bdev1 : 7.92 89.98 269.94 0.00 0.00 15568.16 336.27 118136.51 00:14:06.095 
[2024-12-12T19:42:48.940Z] =================================================================================================================== 00:14:06.095 [2024-12-12T19:42:48.940Z] Total : 89.98 269.94 0.00 0.00 15568.16 336.27 118136.51 00:14:06.095 [2024-12-12 19:42:48.937327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.095 [2024-12-12 19:42:48.937444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.095 [2024-12-12 19:42:48.937592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.095 [2024-12-12 19:42:48.937656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:06.095 { 00:14:06.095 "results": [ 00:14:06.095 { 00:14:06.095 "job": "raid_bdev1", 00:14:06.095 "core_mask": "0x1", 00:14:06.095 "workload": "randrw", 00:14:06.095 "percentage": 50, 00:14:06.095 "status": "finished", 00:14:06.095 "queue_depth": 2, 00:14:06.095 "io_size": 3145728, 00:14:06.095 "runtime": 7.92386, 00:14:06.095 "iops": 89.98139795503707, 00:14:06.095 "mibps": 269.9441938651112, 00:14:06.095 "io_failed": 0, 00:14:06.095 "io_timeout": 0, 00:14:06.095 "avg_latency_us": 15568.157788298413, 00:14:06.095 "min_latency_us": 336.2655021834061, 00:14:06.095 "max_latency_us": 118136.51004366812 00:14:06.095 } 00:14:06.095 ], 00:14:06.095 "core_count": 1 00:14:06.095 } 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.355 19:42:48 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.355 19:42:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:06.355 /dev/nbd0 00:14:06.614 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.615 1+0 records in 00:14:06.615 1+0 records out 00:14:06.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439327 s, 9.3 MB/s 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 
00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.615 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:06.615 /dev/nbd1 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.875 1+0 records in 00:14:06.875 1+0 records out 00:14:06.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493286 s, 8.3 MB/s 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.875 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.135 19:42:49 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.135 19:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:07.395 /dev/nbd1 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.395 1+0 records in 00:14:07.395 1+0 records out 00:14:07.395 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000225433 s, 18.2 MB/s 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.395 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.655 19:42:50 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.655 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 80441 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 80441 ']' 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 80441 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80441 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80441' 00:14:07.915 killing process with pid 80441 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 80441 00:14:07.915 Received shutdown signal, test time was about 9.721216 seconds 00:14:07.915 00:14:07.915 Latency(us) 00:14:07.915 [2024-12-12T19:42:50.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.915 [2024-12-12T19:42:50.760Z] =================================================================================================================== 00:14:07.915 [2024-12-12T19:42:50.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:07.915 [2024-12-12 19:42:50.709908] bdev_raid.c:1387:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:14:07.915 19:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 80441 00:14:08.483 [2024-12-12 19:42:51.111264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.424 19:42:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:09.424 00:14:09.424 real 0m13.181s 00:14:09.424 user 0m16.346s 00:14:09.424 sys 0m2.083s 00:14:09.424 19:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.424 ************************************ 00:14:09.424 END TEST raid_rebuild_test_io 00:14:09.424 ************************************ 00:14:09.424 19:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.773 19:42:52 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:09.773 19:42:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:09.773 19:42:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.773 19:42:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.773 ************************************ 00:14:09.773 START TEST raid_rebuild_test_sb_io 00:14:09.773 ************************************ 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:09.773 19:42:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:09.773 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=80848 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 80848 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 80848 ']' 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.774 19:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.774 [2024-12-12 19:42:52.418796] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:09.774 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:09.774 Zero copy mechanism will not be used. 00:14:09.774 [2024-12-12 19:42:52.418985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80848 ] 00:14:09.774 [2024-12-12 19:42:52.592409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.033 [2024-12-12 19:42:52.705868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.292 [2024-12-12 19:42:52.893422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.292 [2024-12-12 19:42:52.893548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.552 BaseBdev1_malloc 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.552 [2024-12-12 19:42:53.306199] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:10.552 [2024-12-12 19:42:53.306271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.552 [2024-12-12 19:42:53.306293] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:10.552 [2024-12-12 19:42:53.306305] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.552 [2024-12-12 19:42:53.308345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.552 [2024-12-12 19:42:53.308428] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.552 BaseBdev1 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.552 BaseBdev2_malloc 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.552 [2024-12-12 19:42:53.361336] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:10.552 [2024-12-12 19:42:53.361418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.552 [2024-12-12 19:42:53.361438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:10.552 [2024-12-12 19:42:53.361448] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.552 [2024-12-12 19:42:53.363528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.552 [2024-12-12 19:42:53.363658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.552 BaseBdev2 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.552 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.823 BaseBdev3_malloc 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.823 19:42:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.823 [2024-12-12 19:42:53.428660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:10.823 [2024-12-12 19:42:53.428717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.823 [2024-12-12 19:42:53.428754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:10.823 [2024-12-12 19:42:53.428766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.823 [2024-12-12 19:42:53.430779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.823 [2024-12-12 19:42:53.430871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:10.823 BaseBdev3 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.823 BaseBdev4_malloc 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.823 [2024-12-12 19:42:53.481189] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:10.823 [2024-12-12 19:42:53.481250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.823 [2024-12-12 19:42:53.481268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:10.823 [2024-12-12 19:42:53.481278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.823 [2024-12-12 19:42:53.483304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.823 [2024-12-12 19:42:53.483347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:10.823 BaseBdev4 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.823 spare_malloc 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:10.823 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.824 spare_delay 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.824 [2024-12-12 19:42:53.546961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.824 [2024-12-12 19:42:53.547015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.824 [2024-12-12 19:42:53.547048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:10.824 [2024-12-12 19:42:53.547058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.824 [2024-12-12 19:42:53.548966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.824 [2024-12-12 19:42:53.549081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.824 spare 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.824 [2024-12-12 19:42:53.558985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.824 [2024-12-12 19:42:53.560670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.824 [2024-12-12 19:42:53.560728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.824 [2024-12-12 19:42:53.560774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.824 [2024-12-12 19:42:53.560947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:10.824 [2024-12-12 19:42:53.560965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.824 [2024-12-12 19:42:53.561176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:10.824 [2024-12-12 19:42:53.561339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:10.824 [2024-12-12 19:42:53.561348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:10.824 [2024-12-12 19:42:53.561482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.824 "name": "raid_bdev1", 00:14:10.824 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:10.824 "strip_size_kb": 0, 00:14:10.824 "state": "online", 00:14:10.824 "raid_level": "raid1", 00:14:10.824 "superblock": true, 00:14:10.824 "num_base_bdevs": 4, 00:14:10.824 "num_base_bdevs_discovered": 4, 00:14:10.824 "num_base_bdevs_operational": 4, 00:14:10.824 "base_bdevs_list": [ 00:14:10.824 { 00:14:10.824 "name": "BaseBdev1", 00:14:10.824 "uuid": "f855ac25-c84f-5836-a2c9-b9f92c34568f", 00:14:10.824 "is_configured": true, 00:14:10.824 "data_offset": 2048, 00:14:10.824 "data_size": 63488 00:14:10.824 }, 00:14:10.824 { 00:14:10.824 "name": "BaseBdev2", 00:14:10.824 "uuid": "6fb80702-fb5e-5d43-90bd-362f3f6d16e1", 00:14:10.824 "is_configured": true, 00:14:10.824 "data_offset": 2048, 00:14:10.824 "data_size": 63488 00:14:10.824 }, 00:14:10.824 { 00:14:10.824 "name": "BaseBdev3", 00:14:10.824 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:10.824 "is_configured": true, 00:14:10.824 "data_offset": 2048, 00:14:10.824 "data_size": 63488 00:14:10.824 }, 00:14:10.824 { 00:14:10.824 "name": "BaseBdev4", 00:14:10.824 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:10.824 "is_configured": true, 00:14:10.824 "data_offset": 2048, 00:14:10.824 "data_size": 63488 00:14:10.824 } 00:14:10.824 ] 00:14:10.824 }' 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:10.824 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.394 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:11.394 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:11.394 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.394 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.394 [2024-12-12 19:42:53.974716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.395 19:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:11.395 19:42:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.395 [2024-12-12 19:42:54.078257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.395 "name": "raid_bdev1", 00:14:11.395 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:11.395 "strip_size_kb": 0, 00:14:11.395 "state": "online", 00:14:11.395 "raid_level": "raid1", 00:14:11.395 "superblock": true, 00:14:11.395 "num_base_bdevs": 4, 00:14:11.395 "num_base_bdevs_discovered": 3, 00:14:11.395 "num_base_bdevs_operational": 3, 00:14:11.395 "base_bdevs_list": [ 00:14:11.395 { 00:14:11.395 "name": null, 00:14:11.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.395 "is_configured": false, 00:14:11.395 "data_offset": 0, 00:14:11.395 "data_size": 63488 00:14:11.395 }, 00:14:11.395 { 00:14:11.395 "name": "BaseBdev2", 00:14:11.395 "uuid": "6fb80702-fb5e-5d43-90bd-362f3f6d16e1", 00:14:11.395 "is_configured": true, 00:14:11.395 "data_offset": 2048, 00:14:11.395 "data_size": 63488 00:14:11.395 }, 00:14:11.395 { 00:14:11.395 "name": "BaseBdev3", 00:14:11.395 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:11.395 "is_configured": true, 00:14:11.395 "data_offset": 2048, 00:14:11.395 "data_size": 63488 00:14:11.395 }, 00:14:11.395 { 00:14:11.395 "name": "BaseBdev4", 00:14:11.395 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:11.395 "is_configured": true, 00:14:11.395 "data_offset": 2048, 00:14:11.395 "data_size": 63488 00:14:11.395 } 00:14:11.395 ] 00:14:11.395 }' 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.395 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.395 [2024-12-12 19:42:54.174078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:11.395 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:11.395 Zero copy mechanism will not be used. 
00:14:11.395 Running I/O for 60 seconds... 00:14:11.964 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.964 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.964 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.964 [2024-12-12 19:42:54.547263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.964 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.964 19:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:11.964 [2024-12-12 19:42:54.601281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:11.964 [2024-12-12 19:42:54.603179] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.964 [2024-12-12 19:42:54.723319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:11.964 [2024-12-12 19:42:54.724831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:12.224 [2024-12-12 19:42:54.926959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:12.224 [2024-12-12 19:42:54.927712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:12.483 126.00 IOPS, 378.00 MiB/s [2024-12-12T19:42:55.328Z] [2024-12-12 19:42:55.270101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:12.743 [2024-12-12 19:42:55.423653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:12.743 
[2024-12-12 19:42:55.424474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.003 "name": "raid_bdev1", 00:14:13.003 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:13.003 "strip_size_kb": 0, 00:14:13.003 "state": "online", 00:14:13.003 "raid_level": "raid1", 00:14:13.003 "superblock": true, 00:14:13.003 "num_base_bdevs": 4, 00:14:13.003 "num_base_bdevs_discovered": 4, 00:14:13.003 "num_base_bdevs_operational": 4, 00:14:13.003 "process": { 00:14:13.003 "type": "rebuild", 00:14:13.003 "target": "spare", 00:14:13.003 "progress": { 00:14:13.003 "blocks": 10240, 00:14:13.003 "percent": 16 00:14:13.003 } 00:14:13.003 }, 00:14:13.003 "base_bdevs_list": [ 
00:14:13.003 { 00:14:13.003 "name": "spare", 00:14:13.003 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:13.003 "is_configured": true, 00:14:13.003 "data_offset": 2048, 00:14:13.003 "data_size": 63488 00:14:13.003 }, 00:14:13.003 { 00:14:13.003 "name": "BaseBdev2", 00:14:13.003 "uuid": "6fb80702-fb5e-5d43-90bd-362f3f6d16e1", 00:14:13.003 "is_configured": true, 00:14:13.003 "data_offset": 2048, 00:14:13.003 "data_size": 63488 00:14:13.003 }, 00:14:13.003 { 00:14:13.003 "name": "BaseBdev3", 00:14:13.003 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:13.003 "is_configured": true, 00:14:13.003 "data_offset": 2048, 00:14:13.003 "data_size": 63488 00:14:13.003 }, 00:14:13.003 { 00:14:13.003 "name": "BaseBdev4", 00:14:13.003 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:13.003 "is_configured": true, 00:14:13.003 "data_offset": 2048, 00:14:13.003 "data_size": 63488 00:14:13.003 } 00:14:13.003 ] 00:14:13.003 }' 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.003 [2024-12-12 19:42:55.725731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.003 [2024-12-12 19:42:55.768664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:14:13.003 [2024-12-12 19:42:55.769117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:13.003 [2024-12-12 19:42:55.775101] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:13.003 [2024-12-12 19:42:55.784996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.003 [2024-12-12 19:42:55.785072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.003 [2024-12-12 19:42:55.785098] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:13.003 [2024-12-12 19:42:55.807701] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.003 
19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.003 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.263 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.263 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.263 "name": "raid_bdev1", 00:14:13.263 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:13.263 "strip_size_kb": 0, 00:14:13.263 "state": "online", 00:14:13.263 "raid_level": "raid1", 00:14:13.263 "superblock": true, 00:14:13.263 "num_base_bdevs": 4, 00:14:13.263 "num_base_bdevs_discovered": 3, 00:14:13.263 "num_base_bdevs_operational": 3, 00:14:13.263 "base_bdevs_list": [ 00:14:13.263 { 00:14:13.263 "name": null, 00:14:13.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.263 "is_configured": false, 00:14:13.263 "data_offset": 0, 00:14:13.263 "data_size": 63488 00:14:13.263 }, 00:14:13.263 { 00:14:13.263 "name": "BaseBdev2", 00:14:13.263 "uuid": "6fb80702-fb5e-5d43-90bd-362f3f6d16e1", 00:14:13.263 "is_configured": true, 00:14:13.263 "data_offset": 2048, 00:14:13.263 "data_size": 63488 00:14:13.263 }, 00:14:13.263 { 00:14:13.263 "name": "BaseBdev3", 00:14:13.263 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:13.263 "is_configured": true, 00:14:13.263 "data_offset": 2048, 00:14:13.263 "data_size": 63488 00:14:13.263 }, 00:14:13.263 { 00:14:13.263 "name": "BaseBdev4", 00:14:13.263 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:13.263 "is_configured": true, 00:14:13.263 "data_offset": 2048, 
00:14:13.263 "data_size": 63488 00:14:13.263 } 00:14:13.263 ] 00:14:13.263 }' 00:14:13.263 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.263 19:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.523 127.50 IOPS, 382.50 MiB/s [2024-12-12T19:42:56.368Z] 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.523 "name": "raid_bdev1", 00:14:13.523 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:13.523 "strip_size_kb": 0, 00:14:13.523 "state": "online", 00:14:13.523 "raid_level": "raid1", 00:14:13.523 "superblock": true, 00:14:13.523 "num_base_bdevs": 4, 00:14:13.523 "num_base_bdevs_discovered": 3, 00:14:13.523 "num_base_bdevs_operational": 3, 00:14:13.523 "base_bdevs_list": [ 00:14:13.523 { 00:14:13.523 "name": 
null, 00:14:13.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.523 "is_configured": false, 00:14:13.523 "data_offset": 0, 00:14:13.523 "data_size": 63488 00:14:13.523 }, 00:14:13.523 { 00:14:13.523 "name": "BaseBdev2", 00:14:13.523 "uuid": "6fb80702-fb5e-5d43-90bd-362f3f6d16e1", 00:14:13.523 "is_configured": true, 00:14:13.523 "data_offset": 2048, 00:14:13.523 "data_size": 63488 00:14:13.523 }, 00:14:13.523 { 00:14:13.523 "name": "BaseBdev3", 00:14:13.523 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:13.523 "is_configured": true, 00:14:13.523 "data_offset": 2048, 00:14:13.523 "data_size": 63488 00:14:13.523 }, 00:14:13.523 { 00:14:13.523 "name": "BaseBdev4", 00:14:13.523 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:13.523 "is_configured": true, 00:14:13.523 "data_offset": 2048, 00:14:13.523 "data_size": 63488 00:14:13.523 } 00:14:13.523 ] 00:14:13.523 }' 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.523 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.783 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.783 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.783 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.783 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.783 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.783 [2024-12-12 19:42:56.424172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.783 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.783 19:42:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@663 -- # sleep 1 00:14:13.783 [2024-12-12 19:42:56.477926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:13.783 [2024-12-12 19:42:56.479744] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:13.783 [2024-12-12 19:42:56.581232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:13.783 [2024-12-12 19:42:56.581710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:14.043 [2024-12-12 19:42:56.798606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:14.043 [2024-12-12 19:42:56.798921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:14.612 138.00 IOPS, 414.00 MiB/s [2024-12-12T19:42:57.457Z] [2024-12-12 19:42:57.236749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:14.612 [2024-12-12 19:42:57.237459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.872 "name": "raid_bdev1", 00:14:14.872 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:14.872 "strip_size_kb": 0, 00:14:14.872 "state": "online", 00:14:14.872 "raid_level": "raid1", 00:14:14.872 "superblock": true, 00:14:14.872 "num_base_bdevs": 4, 00:14:14.872 "num_base_bdevs_discovered": 4, 00:14:14.872 "num_base_bdevs_operational": 4, 00:14:14.872 "process": { 00:14:14.872 "type": "rebuild", 00:14:14.872 "target": "spare", 00:14:14.872 "progress": { 00:14:14.872 "blocks": 12288, 00:14:14.872 "percent": 19 00:14:14.872 } 00:14:14.872 }, 00:14:14.872 "base_bdevs_list": [ 00:14:14.872 { 00:14:14.872 "name": "spare", 00:14:14.872 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:14.872 "is_configured": true, 00:14:14.872 "data_offset": 2048, 00:14:14.872 "data_size": 63488 00:14:14.872 }, 00:14:14.872 { 00:14:14.872 "name": "BaseBdev2", 00:14:14.872 "uuid": "6fb80702-fb5e-5d43-90bd-362f3f6d16e1", 00:14:14.872 "is_configured": true, 00:14:14.872 "data_offset": 2048, 00:14:14.872 "data_size": 63488 00:14:14.872 }, 00:14:14.872 { 00:14:14.872 "name": "BaseBdev3", 00:14:14.872 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:14.872 "is_configured": true, 00:14:14.872 "data_offset": 2048, 00:14:14.872 "data_size": 63488 00:14:14.872 }, 00:14:14.872 { 00:14:14.872 "name": "BaseBdev4", 00:14:14.872 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:14.872 "is_configured": true, 
00:14:14.872 "data_offset": 2048, 00:14:14.872 "data_size": 63488 00:14:14.872 } 00:14:14.872 ] 00:14:14.872 }' 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:14.872 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.872 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.872 [2024-12-12 19:42:57.600984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.132 [2024-12-12 19:42:57.723312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:15.132 [2024-12-12 19:42:57.724093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:14:15.132 [2024-12-12 19:42:57.927249] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:15.132 [2024-12-12 19:42:57.927333] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:15.132 [2024-12-12 19:42:57.933472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.132 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.392 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:15.392 "name": "raid_bdev1", 00:14:15.392 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:15.392 "strip_size_kb": 0, 00:14:15.392 "state": "online", 00:14:15.392 "raid_level": "raid1", 00:14:15.392 "superblock": true, 00:14:15.392 "num_base_bdevs": 4, 00:14:15.392 "num_base_bdevs_discovered": 3, 00:14:15.392 "num_base_bdevs_operational": 3, 00:14:15.392 "process": { 00:14:15.392 "type": "rebuild", 00:14:15.392 "target": "spare", 00:14:15.392 "progress": { 00:14:15.392 "blocks": 16384, 00:14:15.392 "percent": 25 00:14:15.392 } 00:14:15.392 }, 00:14:15.392 "base_bdevs_list": [ 00:14:15.392 { 00:14:15.392 "name": "spare", 00:14:15.392 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:15.392 "is_configured": true, 00:14:15.392 "data_offset": 2048, 00:14:15.392 "data_size": 63488 00:14:15.392 }, 00:14:15.392 { 00:14:15.392 "name": null, 00:14:15.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.392 "is_configured": false, 00:14:15.392 "data_offset": 0, 00:14:15.392 "data_size": 63488 00:14:15.392 }, 00:14:15.392 { 00:14:15.392 "name": "BaseBdev3", 00:14:15.392 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:15.392 "is_configured": true, 00:14:15.392 "data_offset": 2048, 00:14:15.392 "data_size": 63488 00:14:15.392 }, 00:14:15.392 { 00:14:15.392 "name": "BaseBdev4", 00:14:15.392 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:15.392 "is_configured": true, 00:14:15.392 "data_offset": 2048, 00:14:15.392 "data_size": 63488 00:14:15.392 } 00:14:15.392 ] 00:14:15.392 }' 00:14:15.392 19:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.392 "name": "raid_bdev1", 00:14:15.392 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:15.392 "strip_size_kb": 0, 00:14:15.392 "state": "online", 00:14:15.392 "raid_level": "raid1", 00:14:15.392 "superblock": true, 00:14:15.392 "num_base_bdevs": 4, 00:14:15.392 "num_base_bdevs_discovered": 3, 00:14:15.392 "num_base_bdevs_operational": 3, 00:14:15.392 "process": { 00:14:15.392 "type": "rebuild", 00:14:15.392 "target": "spare", 00:14:15.392 "progress": { 00:14:15.392 "blocks": 16384, 00:14:15.392 
"percent": 25 00:14:15.392 } 00:14:15.392 }, 00:14:15.392 "base_bdevs_list": [ 00:14:15.392 { 00:14:15.392 "name": "spare", 00:14:15.392 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:15.392 "is_configured": true, 00:14:15.392 "data_offset": 2048, 00:14:15.392 "data_size": 63488 00:14:15.392 }, 00:14:15.392 { 00:14:15.392 "name": null, 00:14:15.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.392 "is_configured": false, 00:14:15.392 "data_offset": 0, 00:14:15.392 "data_size": 63488 00:14:15.392 }, 00:14:15.392 { 00:14:15.392 "name": "BaseBdev3", 00:14:15.392 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:15.392 "is_configured": true, 00:14:15.392 "data_offset": 2048, 00:14:15.392 "data_size": 63488 00:14:15.392 }, 00:14:15.392 { 00:14:15.392 "name": "BaseBdev4", 00:14:15.392 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:15.392 "is_configured": true, 00:14:15.392 "data_offset": 2048, 00:14:15.392 "data_size": 63488 00:14:15.392 } 00:14:15.392 ] 00:14:15.392 }' 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.392 119.25 IOPS, 357.75 MiB/s [2024-12-12T19:42:58.237Z] 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.392 19:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:15.652 [2024-12-12 19:42:58.256214] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:15.652 [2024-12-12 19:42:58.257332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:15.652 [2024-12-12 19:42:58.466884] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:15.912 [2024-12-12 19:42:58.697643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:16.172 [2024-12-12 19:42:58.818810] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:16.432 [2024-12-12 19:42:59.166646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:16.432 [2024-12-12 19:42:59.167528] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:16.432 107.60 IOPS, 322.80 MiB/s [2024-12-12T19:42:59.277Z] 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.432 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.432 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.432 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.433 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.433 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.433 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.433 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.433 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.433 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.433 19:42:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.692 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.692 "name": "raid_bdev1", 00:14:16.692 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:16.692 "strip_size_kb": 0, 00:14:16.692 "state": "online", 00:14:16.692 "raid_level": "raid1", 00:14:16.692 "superblock": true, 00:14:16.692 "num_base_bdevs": 4, 00:14:16.692 "num_base_bdevs_discovered": 3, 00:14:16.692 "num_base_bdevs_operational": 3, 00:14:16.692 "process": { 00:14:16.692 "type": "rebuild", 00:14:16.692 "target": "spare", 00:14:16.692 "progress": { 00:14:16.692 "blocks": 32768, 00:14:16.692 "percent": 51 00:14:16.692 } 00:14:16.692 }, 00:14:16.692 "base_bdevs_list": [ 00:14:16.692 { 00:14:16.692 "name": "spare", 00:14:16.692 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:16.692 "is_configured": true, 00:14:16.692 "data_offset": 2048, 00:14:16.692 "data_size": 63488 00:14:16.692 }, 00:14:16.692 { 00:14:16.692 "name": null, 00:14:16.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.692 "is_configured": false, 00:14:16.692 "data_offset": 0, 00:14:16.692 "data_size": 63488 00:14:16.692 }, 00:14:16.692 { 00:14:16.692 "name": "BaseBdev3", 00:14:16.692 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:16.692 "is_configured": true, 00:14:16.692 "data_offset": 2048, 00:14:16.692 "data_size": 63488 00:14:16.692 }, 00:14:16.692 { 00:14:16.692 "name": "BaseBdev4", 00:14:16.692 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:16.692 "is_configured": true, 00:14:16.692 "data_offset": 2048, 00:14:16.692 "data_size": 63488 00:14:16.692 } 00:14:16.692 ] 00:14:16.692 }' 00:14:16.692 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.692 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.692 19:42:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.692 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.692 19:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:16.693 [2024-12-12 19:42:59.397099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:17.262 [2024-12-12 19:42:59.997202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:17.522 96.50 IOPS, 289.50 MiB/s [2024-12-12T19:43:00.367Z] [2024-12-12 19:43:00.214069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:17.522 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.522 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.782 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.782 "name": "raid_bdev1", 00:14:17.782 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:17.782 "strip_size_kb": 0, 00:14:17.782 "state": "online", 00:14:17.782 "raid_level": "raid1", 00:14:17.782 "superblock": true, 00:14:17.782 "num_base_bdevs": 4, 00:14:17.782 "num_base_bdevs_discovered": 3, 00:14:17.782 "num_base_bdevs_operational": 3, 00:14:17.782 "process": { 00:14:17.782 "type": "rebuild", 00:14:17.782 "target": "spare", 00:14:17.782 "progress": { 00:14:17.782 "blocks": 49152, 00:14:17.782 "percent": 77 00:14:17.782 } 00:14:17.782 }, 00:14:17.782 "base_bdevs_list": [ 00:14:17.782 { 00:14:17.782 "name": "spare", 00:14:17.782 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:17.782 "is_configured": true, 00:14:17.783 "data_offset": 2048, 00:14:17.783 "data_size": 63488 00:14:17.783 }, 00:14:17.783 { 00:14:17.783 "name": null, 00:14:17.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.783 "is_configured": false, 00:14:17.783 "data_offset": 0, 00:14:17.783 "data_size": 63488 00:14:17.783 }, 00:14:17.783 { 00:14:17.783 "name": "BaseBdev3", 00:14:17.783 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:17.783 "is_configured": true, 00:14:17.783 "data_offset": 2048, 00:14:17.783 "data_size": 63488 00:14:17.783 }, 00:14:17.783 { 00:14:17.783 "name": "BaseBdev4", 00:14:17.783 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:17.783 "is_configured": true, 00:14:17.783 "data_offset": 2048, 00:14:17.783 "data_size": 63488 00:14:17.783 } 00:14:17.783 ] 00:14:17.783 }' 00:14:17.783 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.783 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.783 19:43:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.783 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.783 19:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.352 [2024-12-12 19:43:01.106716] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:18.612 88.57 IOPS, 265.71 MiB/s [2024-12-12T19:43:01.457Z] [2024-12-12 19:43:01.206553] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:18.612 [2024-12-12 19:43:01.209133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.873 "name": "raid_bdev1", 00:14:18.873 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:18.873 "strip_size_kb": 0, 00:14:18.873 "state": "online", 00:14:18.873 "raid_level": "raid1", 00:14:18.873 "superblock": true, 00:14:18.873 "num_base_bdevs": 4, 00:14:18.873 "num_base_bdevs_discovered": 3, 00:14:18.873 "num_base_bdevs_operational": 3, 00:14:18.873 "base_bdevs_list": [ 00:14:18.873 { 00:14:18.873 "name": "spare", 00:14:18.873 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:18.873 "is_configured": true, 00:14:18.873 "data_offset": 2048, 00:14:18.873 "data_size": 63488 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "name": null, 00:14:18.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.873 "is_configured": false, 00:14:18.873 "data_offset": 0, 00:14:18.873 "data_size": 63488 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "name": "BaseBdev3", 00:14:18.873 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:18.873 "is_configured": true, 00:14:18.873 "data_offset": 2048, 00:14:18.873 "data_size": 63488 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "name": "BaseBdev4", 00:14:18.873 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:18.873 "is_configured": true, 00:14:18.873 "data_offset": 2048, 00:14:18.873 "data_size": 63488 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }' 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:18.873 19:43:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.873 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.133 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.133 "name": "raid_bdev1", 00:14:19.133 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:19.133 "strip_size_kb": 0, 00:14:19.133 "state": "online", 00:14:19.133 "raid_level": "raid1", 00:14:19.133 "superblock": true, 00:14:19.133 "num_base_bdevs": 4, 00:14:19.133 "num_base_bdevs_discovered": 3, 00:14:19.133 "num_base_bdevs_operational": 3, 00:14:19.133 "base_bdevs_list": [ 00:14:19.133 { 00:14:19.133 "name": "spare", 00:14:19.133 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:19.133 "is_configured": true, 00:14:19.133 "data_offset": 2048, 00:14:19.133 "data_size": 63488 00:14:19.133 }, 00:14:19.133 { 00:14:19.133 "name": null, 00:14:19.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.134 "is_configured": false, 00:14:19.134 "data_offset": 
0, 00:14:19.134 "data_size": 63488 00:14:19.134 }, 00:14:19.134 { 00:14:19.134 "name": "BaseBdev3", 00:14:19.134 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:19.134 "is_configured": true, 00:14:19.134 "data_offset": 2048, 00:14:19.134 "data_size": 63488 00:14:19.134 }, 00:14:19.134 { 00:14:19.134 "name": "BaseBdev4", 00:14:19.134 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:19.134 "is_configured": true, 00:14:19.134 "data_offset": 2048, 00:14:19.134 "data_size": 63488 00:14:19.134 } 00:14:19.134 ] 00:14:19.134 }' 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.134 "name": "raid_bdev1", 00:14:19.134 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:19.134 "strip_size_kb": 0, 00:14:19.134 "state": "online", 00:14:19.134 "raid_level": "raid1", 00:14:19.134 "superblock": true, 00:14:19.134 "num_base_bdevs": 4, 00:14:19.134 "num_base_bdevs_discovered": 3, 00:14:19.134 "num_base_bdevs_operational": 3, 00:14:19.134 "base_bdevs_list": [ 00:14:19.134 { 00:14:19.134 "name": "spare", 00:14:19.134 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:19.134 "is_configured": true, 00:14:19.134 "data_offset": 2048, 00:14:19.134 "data_size": 63488 00:14:19.134 }, 00:14:19.134 { 00:14:19.134 "name": null, 00:14:19.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.134 "is_configured": false, 00:14:19.134 "data_offset": 0, 00:14:19.134 "data_size": 63488 00:14:19.134 }, 00:14:19.134 { 00:14:19.134 "name": "BaseBdev3", 00:14:19.134 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:19.134 "is_configured": true, 00:14:19.134 "data_offset": 2048, 00:14:19.134 "data_size": 63488 00:14:19.134 }, 00:14:19.134 { 00:14:19.134 "name": "BaseBdev4", 00:14:19.134 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:19.134 "is_configured": 
true, 00:14:19.134 "data_offset": 2048, 00:14:19.134 "data_size": 63488 00:14:19.134 } 00:14:19.134 ] 00:14:19.134 }' 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.134 19:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.654 83.38 IOPS, 250.12 MiB/s [2024-12-12T19:43:02.499Z] 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.654 [2024-12-12 19:43:02.299223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.654 [2024-12-12 19:43:02.299263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.654 00:14:19.654 Latency(us) 00:14:19.654 [2024-12-12T19:43:02.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.654 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:19.654 raid_bdev1 : 8.23 82.46 247.37 0.00 0.00 16345.58 318.38 115847.04 00:14:19.654 [2024-12-12T19:43:02.499Z] =================================================================================================================== 00:14:19.654 [2024-12-12T19:43:02.499Z] Total : 82.46 247.37 0.00 0.00 16345.58 318.38 115847.04 00:14:19.654 [2024-12-12 19:43:02.415517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.654 [2024-12-12 19:43:02.415647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.654 [2024-12-12 19:43:02.415781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.654 [2024-12-12 19:43:02.415848] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:19.654 { 00:14:19.654 "results": [ 00:14:19.654 { 00:14:19.654 "job": "raid_bdev1", 00:14:19.654 "core_mask": "0x1", 00:14:19.654 "workload": "randrw", 00:14:19.654 "percentage": 50, 00:14:19.654 "status": "finished", 00:14:19.654 "queue_depth": 2, 00:14:19.654 "io_size": 3145728, 00:14:19.654 "runtime": 8.234744, 00:14:19.654 "iops": 82.45550802793626, 00:14:19.654 "mibps": 247.36652408380877, 00:14:19.654 "io_failed": 0, 00:14:19.654 "io_timeout": 0, 00:14:19.654 "avg_latency_us": 16345.575715636272, 00:14:19.654 "min_latency_us": 318.37903930131006, 00:14:19.654 "max_latency_us": 115847.04279475982 00:14:19.654 } 00:14:19.654 ], 00:14:19.654 "core_count": 1 00:14:19.654 } 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.654 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:19.914 /dev/nbd0 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.914 1+0 records in 00:14:19.914 1+0 records out 00:14:19.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405866 s, 10.1 MB/s 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.914 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.915 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:20.175 /dev/nbd1 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.175 1+0 records in 00:14:20.175 1+0 records out 00:14:20.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533859 s, 7.7 MB/s 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.175 19:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:20.435 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:20.435 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.435 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:20.435 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.435 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:20.435 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.435 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.695 19:43:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.695 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:20.955 /dev/nbd1 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.955 1+0 records in 00:14:20.955 1+0 records out 00:14:20.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416065 s, 9.8 MB/s 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.955 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.215 
19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.215 19:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.475 19:43:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.475 [2024-12-12 19:43:04.147142] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:21.475 [2024-12-12 19:43:04.147286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.475 [2024-12-12 19:43:04.147312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:21.475 [2024-12-12 19:43:04.147324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.475 [2024-12-12 19:43:04.149477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.475 [2024-12-12 19:43:04.149519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:21.475 [2024-12-12 19:43:04.149616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:21.475 [2024-12-12 19:43:04.149671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.475 [2024-12-12 19:43:04.149804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.475 [2024-12-12 
19:43:04.149903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:21.475 spare 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.475 [2024-12-12 19:43:04.249783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:21.475 [2024-12-12 19:43:04.249813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:21.475 [2024-12-12 19:43:04.250066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:21.475 [2024-12-12 19:43:04.250246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:21.475 [2024-12-12 19:43:04.250258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:21.475 [2024-12-12 19:43:04.250415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.475 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.476 "name": "raid_bdev1", 00:14:21.476 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:21.476 "strip_size_kb": 0, 00:14:21.476 "state": "online", 00:14:21.476 "raid_level": "raid1", 00:14:21.476 "superblock": true, 00:14:21.476 "num_base_bdevs": 4, 00:14:21.476 "num_base_bdevs_discovered": 3, 00:14:21.476 "num_base_bdevs_operational": 3, 00:14:21.476 "base_bdevs_list": [ 00:14:21.476 { 00:14:21.476 "name": "spare", 00:14:21.476 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:21.476 "is_configured": true, 00:14:21.476 "data_offset": 2048, 00:14:21.476 "data_size": 63488 00:14:21.476 }, 00:14:21.476 { 00:14:21.476 "name": null, 00:14:21.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.476 "is_configured": false, 
00:14:21.476 "data_offset": 2048, 00:14:21.476 "data_size": 63488 00:14:21.476 }, 00:14:21.476 { 00:14:21.476 "name": "BaseBdev3", 00:14:21.476 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:21.476 "is_configured": true, 00:14:21.476 "data_offset": 2048, 00:14:21.476 "data_size": 63488 00:14:21.476 }, 00:14:21.476 { 00:14:21.476 "name": "BaseBdev4", 00:14:21.476 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:21.476 "is_configured": true, 00:14:21.476 "data_offset": 2048, 00:14:21.476 "data_size": 63488 00:14:21.476 } 00:14:21.476 ] 00:14:21.476 }' 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.476 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:22.046 "name": "raid_bdev1", 00:14:22.046 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:22.046 "strip_size_kb": 0, 00:14:22.046 "state": "online", 00:14:22.046 "raid_level": "raid1", 00:14:22.046 "superblock": true, 00:14:22.046 "num_base_bdevs": 4, 00:14:22.046 "num_base_bdevs_discovered": 3, 00:14:22.046 "num_base_bdevs_operational": 3, 00:14:22.046 "base_bdevs_list": [ 00:14:22.046 { 00:14:22.046 "name": "spare", 00:14:22.046 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:22.046 "is_configured": true, 00:14:22.046 "data_offset": 2048, 00:14:22.046 "data_size": 63488 00:14:22.046 }, 00:14:22.046 { 00:14:22.046 "name": null, 00:14:22.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.046 "is_configured": false, 00:14:22.046 "data_offset": 2048, 00:14:22.046 "data_size": 63488 00:14:22.046 }, 00:14:22.046 { 00:14:22.046 "name": "BaseBdev3", 00:14:22.046 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:22.046 "is_configured": true, 00:14:22.046 "data_offset": 2048, 00:14:22.046 "data_size": 63488 00:14:22.046 }, 00:14:22.046 { 00:14:22.046 "name": "BaseBdev4", 00:14:22.046 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:22.046 "is_configured": true, 00:14:22.046 "data_offset": 2048, 00:14:22.046 "data_size": 63488 00:14:22.046 } 00:14:22.046 ] 00:14:22.046 }' 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 
-- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.046 [2024-12-12 19:43:04.882220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.046 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.306 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.306 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.306 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.306 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.306 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.306 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.306 "name": "raid_bdev1", 00:14:22.306 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:22.306 "strip_size_kb": 0, 00:14:22.306 "state": "online", 00:14:22.306 "raid_level": "raid1", 00:14:22.306 "superblock": true, 00:14:22.306 "num_base_bdevs": 4, 00:14:22.306 "num_base_bdevs_discovered": 2, 00:14:22.306 "num_base_bdevs_operational": 2, 00:14:22.306 "base_bdevs_list": [ 00:14:22.306 { 00:14:22.306 "name": null, 00:14:22.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.306 "is_configured": false, 00:14:22.306 "data_offset": 0, 00:14:22.306 "data_size": 63488 00:14:22.306 }, 00:14:22.306 { 00:14:22.306 "name": null, 00:14:22.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.306 "is_configured": false, 00:14:22.306 "data_offset": 2048, 00:14:22.306 "data_size": 63488 00:14:22.306 }, 00:14:22.306 { 00:14:22.306 "name": "BaseBdev3", 00:14:22.306 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:22.306 "is_configured": true, 00:14:22.306 "data_offset": 2048, 00:14:22.306 "data_size": 63488 00:14:22.306 }, 00:14:22.306 { 00:14:22.306 "name": "BaseBdev4", 00:14:22.306 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 
00:14:22.306 "is_configured": true, 00:14:22.306 "data_offset": 2048, 00:14:22.306 "data_size": 63488 00:14:22.306 } 00:14:22.306 ] 00:14:22.306 }' 00:14:22.306 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.306 19:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.564 19:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:22.564 19:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.564 19:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.564 [2024-12-12 19:43:05.317545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.564 [2024-12-12 19:43:05.317781] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:22.564 [2024-12-12 19:43:05.317841] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:22.565 [2024-12-12 19:43:05.317905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.565 [2024-12-12 19:43:05.332133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:22.565 19:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.565 19:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:22.565 [2024-12-12 19:43:05.333908] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.503 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.503 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.503 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.503 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.503 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.503 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.503 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.503 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.503 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.763 "name": "raid_bdev1", 00:14:23.763 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:23.763 "strip_size_kb": 0, 00:14:23.763 "state": "online", 
00:14:23.763 "raid_level": "raid1", 00:14:23.763 "superblock": true, 00:14:23.763 "num_base_bdevs": 4, 00:14:23.763 "num_base_bdevs_discovered": 3, 00:14:23.763 "num_base_bdevs_operational": 3, 00:14:23.763 "process": { 00:14:23.763 "type": "rebuild", 00:14:23.763 "target": "spare", 00:14:23.763 "progress": { 00:14:23.763 "blocks": 20480, 00:14:23.763 "percent": 32 00:14:23.763 } 00:14:23.763 }, 00:14:23.763 "base_bdevs_list": [ 00:14:23.763 { 00:14:23.763 "name": "spare", 00:14:23.763 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:23.763 "is_configured": true, 00:14:23.763 "data_offset": 2048, 00:14:23.763 "data_size": 63488 00:14:23.763 }, 00:14:23.763 { 00:14:23.763 "name": null, 00:14:23.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.763 "is_configured": false, 00:14:23.763 "data_offset": 2048, 00:14:23.763 "data_size": 63488 00:14:23.763 }, 00:14:23.763 { 00:14:23.763 "name": "BaseBdev3", 00:14:23.763 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:23.763 "is_configured": true, 00:14:23.763 "data_offset": 2048, 00:14:23.763 "data_size": 63488 00:14:23.763 }, 00:14:23.763 { 00:14:23.763 "name": "BaseBdev4", 00:14:23.763 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:23.763 "is_configured": true, 00:14:23.763 "data_offset": 2048, 00:14:23.763 "data_size": 63488 00:14:23.763 } 00:14:23.763 ] 00:14:23.763 }' 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:23.763 19:43:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.763 [2024-12-12 19:43:06.497915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.763 [2024-12-12 19:43:06.538834] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:23.763 [2024-12-12 19:43:06.538894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.763 [2024-12-12 19:43:06.538909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.763 [2024-12-12 19:43:06.538918] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.763 19:43:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.763 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.023 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.023 "name": "raid_bdev1", 00:14:24.023 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:24.023 "strip_size_kb": 0, 00:14:24.023 "state": "online", 00:14:24.023 "raid_level": "raid1", 00:14:24.023 "superblock": true, 00:14:24.023 "num_base_bdevs": 4, 00:14:24.023 "num_base_bdevs_discovered": 2, 00:14:24.023 "num_base_bdevs_operational": 2, 00:14:24.023 "base_bdevs_list": [ 00:14:24.023 { 00:14:24.023 "name": null, 00:14:24.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.023 "is_configured": false, 00:14:24.023 "data_offset": 0, 00:14:24.023 "data_size": 63488 00:14:24.023 }, 00:14:24.023 { 00:14:24.023 "name": null, 00:14:24.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.023 "is_configured": false, 00:14:24.023 "data_offset": 2048, 00:14:24.023 "data_size": 63488 00:14:24.023 }, 00:14:24.023 { 00:14:24.023 "name": "BaseBdev3", 00:14:24.023 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:24.023 "is_configured": true, 00:14:24.023 "data_offset": 2048, 00:14:24.023 "data_size": 63488 00:14:24.023 }, 00:14:24.023 { 00:14:24.023 "name": "BaseBdev4", 00:14:24.023 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:24.023 "is_configured": true, 00:14:24.023 "data_offset": 2048, 00:14:24.023 
"data_size": 63488 00:14:24.023 } 00:14:24.023 ] 00:14:24.023 }' 00:14:24.023 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.023 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.283 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:24.283 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.283 19:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.283 [2024-12-12 19:43:06.993357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:24.283 [2024-12-12 19:43:06.993466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.283 [2024-12-12 19:43:06.993507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:24.283 [2024-12-12 19:43:06.993537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.283 [2024-12-12 19:43:06.994036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.283 [2024-12-12 19:43:06.994097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:24.283 [2024-12-12 19:43:06.994248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:24.283 [2024-12-12 19:43:06.994291] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:24.283 [2024-12-12 19:43:06.994349] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:24.283 [2024-12-12 19:43:06.994395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.283 [2024-12-12 19:43:07.008721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:24.283 spare 00:14:24.283 19:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.283 [2024-12-12 19:43:07.010488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.283 19:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:25.222 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.222 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.223 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.223 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.223 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.223 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.223 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.223 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.223 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.223 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.482 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.482 "name": "raid_bdev1", 00:14:25.482 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:25.482 "strip_size_kb": 0, 00:14:25.482 
"state": "online", 00:14:25.482 "raid_level": "raid1", 00:14:25.482 "superblock": true, 00:14:25.482 "num_base_bdevs": 4, 00:14:25.482 "num_base_bdevs_discovered": 3, 00:14:25.482 "num_base_bdevs_operational": 3, 00:14:25.482 "process": { 00:14:25.482 "type": "rebuild", 00:14:25.482 "target": "spare", 00:14:25.482 "progress": { 00:14:25.482 "blocks": 20480, 00:14:25.482 "percent": 32 00:14:25.482 } 00:14:25.482 }, 00:14:25.482 "base_bdevs_list": [ 00:14:25.482 { 00:14:25.482 "name": "spare", 00:14:25.482 "uuid": "5cb43683-2dbb-5b1d-98ad-15b641ef560d", 00:14:25.482 "is_configured": true, 00:14:25.482 "data_offset": 2048, 00:14:25.482 "data_size": 63488 00:14:25.482 }, 00:14:25.482 { 00:14:25.482 "name": null, 00:14:25.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.483 "is_configured": false, 00:14:25.483 "data_offset": 2048, 00:14:25.483 "data_size": 63488 00:14:25.483 }, 00:14:25.483 { 00:14:25.483 "name": "BaseBdev3", 00:14:25.483 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:25.483 "is_configured": true, 00:14:25.483 "data_offset": 2048, 00:14:25.483 "data_size": 63488 00:14:25.483 }, 00:14:25.483 { 00:14:25.483 "name": "BaseBdev4", 00:14:25.483 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:25.483 "is_configured": true, 00:14:25.483 "data_offset": 2048, 00:14:25.483 "data_size": 63488 00:14:25.483 } 00:14:25.483 ] 00:14:25.483 }' 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:25.483 19:43:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.483 [2024-12-12 19:43:08.170441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.483 [2024-12-12 19:43:08.215312] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:25.483 [2024-12-12 19:43:08.215414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.483 [2024-12-12 19:43:08.215452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.483 [2024-12-12 19:43:08.215500] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.483 19:43:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.483 "name": "raid_bdev1", 00:14:25.483 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:25.483 "strip_size_kb": 0, 00:14:25.483 "state": "online", 00:14:25.483 "raid_level": "raid1", 00:14:25.483 "superblock": true, 00:14:25.483 "num_base_bdevs": 4, 00:14:25.483 "num_base_bdevs_discovered": 2, 00:14:25.483 "num_base_bdevs_operational": 2, 00:14:25.483 "base_bdevs_list": [ 00:14:25.483 { 00:14:25.483 "name": null, 00:14:25.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.483 "is_configured": false, 00:14:25.483 "data_offset": 0, 00:14:25.483 "data_size": 63488 00:14:25.483 }, 00:14:25.483 { 00:14:25.483 "name": null, 00:14:25.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.483 "is_configured": false, 00:14:25.483 "data_offset": 2048, 00:14:25.483 "data_size": 63488 00:14:25.483 }, 00:14:25.483 { 00:14:25.483 "name": "BaseBdev3", 00:14:25.483 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:25.483 "is_configured": true, 00:14:25.483 "data_offset": 2048, 00:14:25.483 "data_size": 63488 00:14:25.483 }, 00:14:25.483 { 00:14:25.483 "name": "BaseBdev4", 00:14:25.483 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:25.483 "is_configured": true, 00:14:25.483 "data_offset": 2048, 00:14:25.483 
"data_size": 63488 00:14:25.483 } 00:14:25.483 ] 00:14:25.483 }' 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.483 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.053 "name": "raid_bdev1", 00:14:26.053 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:26.053 "strip_size_kb": 0, 00:14:26.053 "state": "online", 00:14:26.053 "raid_level": "raid1", 00:14:26.053 "superblock": true, 00:14:26.053 "num_base_bdevs": 4, 00:14:26.053 "num_base_bdevs_discovered": 2, 00:14:26.053 "num_base_bdevs_operational": 2, 00:14:26.053 "base_bdevs_list": [ 00:14:26.053 { 00:14:26.053 "name": null, 00:14:26.053 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:26.053 "is_configured": false, 00:14:26.053 "data_offset": 0, 00:14:26.053 "data_size": 63488 00:14:26.053 }, 00:14:26.053 { 00:14:26.053 "name": null, 00:14:26.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.053 "is_configured": false, 00:14:26.053 "data_offset": 2048, 00:14:26.053 "data_size": 63488 00:14:26.053 }, 00:14:26.053 { 00:14:26.053 "name": "BaseBdev3", 00:14:26.053 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:26.053 "is_configured": true, 00:14:26.053 "data_offset": 2048, 00:14:26.053 "data_size": 63488 00:14:26.053 }, 00:14:26.053 { 00:14:26.053 "name": "BaseBdev4", 00:14:26.053 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:26.053 "is_configured": true, 00:14:26.053 "data_offset": 2048, 00:14:26.053 "data_size": 63488 00:14:26.053 } 00:14:26.053 ] 00:14:26.053 }' 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.053 19:43:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.053 [2024-12-12 19:43:08.838359] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:26.053 [2024-12-12 19:43:08.838415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.053 [2024-12-12 19:43:08.838452] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:26.053 [2024-12-12 19:43:08.838460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.053 [2024-12-12 19:43:08.838900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.053 [2024-12-12 19:43:08.838924] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:26.053 [2024-12-12 19:43:08.839018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:26.053 [2024-12-12 19:43:08.839032] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:26.053 [2024-12-12 19:43:08.839041] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:26.053 [2024-12-12 19:43:08.839050] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:26.053 BaseBdev1 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.053 19:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.022 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.281 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.281 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.281 "name": "raid_bdev1", 00:14:27.281 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:27.281 "strip_size_kb": 0, 00:14:27.281 "state": "online", 00:14:27.281 "raid_level": "raid1", 00:14:27.281 "superblock": true, 00:14:27.281 "num_base_bdevs": 4, 00:14:27.281 "num_base_bdevs_discovered": 2, 00:14:27.281 "num_base_bdevs_operational": 2, 00:14:27.281 "base_bdevs_list": [ 00:14:27.281 { 00:14:27.281 "name": null, 00:14:27.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.281 "is_configured": false, 00:14:27.281 
"data_offset": 0, 00:14:27.281 "data_size": 63488 00:14:27.281 }, 00:14:27.281 { 00:14:27.281 "name": null, 00:14:27.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.281 "is_configured": false, 00:14:27.281 "data_offset": 2048, 00:14:27.281 "data_size": 63488 00:14:27.281 }, 00:14:27.281 { 00:14:27.281 "name": "BaseBdev3", 00:14:27.281 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:27.281 "is_configured": true, 00:14:27.281 "data_offset": 2048, 00:14:27.281 "data_size": 63488 00:14:27.281 }, 00:14:27.281 { 00:14:27.281 "name": "BaseBdev4", 00:14:27.281 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:27.281 "is_configured": true, 00:14:27.281 "data_offset": 2048, 00:14:27.281 "data_size": 63488 00:14:27.281 } 00:14:27.281 ] 00:14:27.281 }' 00:14:27.281 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.281 19:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.540 "name": "raid_bdev1", 00:14:27.540 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:27.540 "strip_size_kb": 0, 00:14:27.540 "state": "online", 00:14:27.540 "raid_level": "raid1", 00:14:27.540 "superblock": true, 00:14:27.540 "num_base_bdevs": 4, 00:14:27.540 "num_base_bdevs_discovered": 2, 00:14:27.540 "num_base_bdevs_operational": 2, 00:14:27.540 "base_bdevs_list": [ 00:14:27.540 { 00:14:27.540 "name": null, 00:14:27.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.540 "is_configured": false, 00:14:27.540 "data_offset": 0, 00:14:27.540 "data_size": 63488 00:14:27.540 }, 00:14:27.540 { 00:14:27.540 "name": null, 00:14:27.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.540 "is_configured": false, 00:14:27.540 "data_offset": 2048, 00:14:27.540 "data_size": 63488 00:14:27.540 }, 00:14:27.540 { 00:14:27.540 "name": "BaseBdev3", 00:14:27.540 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:27.540 "is_configured": true, 00:14:27.540 "data_offset": 2048, 00:14:27.540 "data_size": 63488 00:14:27.540 }, 00:14:27.540 { 00:14:27.540 "name": "BaseBdev4", 00:14:27.540 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:27.540 "is_configured": true, 00:14:27.540 "data_offset": 2048, 00:14:27.540 "data_size": 63488 00:14:27.540 } 00:14:27.540 ] 00:14:27.540 }' 00:14:27.540 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.800 
19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.800 [2024-12-12 19:43:10.467818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.800 [2024-12-12 19:43:10.468033] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:27.800 [2024-12-12 19:43:10.468091] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:27.800 request: 00:14:27.800 { 00:14:27.800 "base_bdev": "BaseBdev1", 00:14:27.800 "raid_bdev": "raid_bdev1", 00:14:27.800 "method": "bdev_raid_add_base_bdev", 00:14:27.800 "req_id": 1 00:14:27.800 } 00:14:27.800 Got JSON-RPC error response 00:14:27.800 response: 00:14:27.800 { 00:14:27.800 "code": -22, 00:14:27.800 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:27.800 } 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.800 19:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.737 19:43:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.737 "name": "raid_bdev1", 00:14:28.737 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:28.737 "strip_size_kb": 0, 00:14:28.737 "state": "online", 00:14:28.737 "raid_level": "raid1", 00:14:28.737 "superblock": true, 00:14:28.737 "num_base_bdevs": 4, 00:14:28.737 "num_base_bdevs_discovered": 2, 00:14:28.737 "num_base_bdevs_operational": 2, 00:14:28.737 "base_bdevs_list": [ 00:14:28.737 { 00:14:28.737 "name": null, 00:14:28.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.737 "is_configured": false, 00:14:28.737 "data_offset": 0, 00:14:28.737 "data_size": 63488 00:14:28.737 }, 00:14:28.737 { 00:14:28.737 "name": null, 00:14:28.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.737 "is_configured": false, 00:14:28.737 "data_offset": 2048, 00:14:28.737 "data_size": 63488 00:14:28.737 }, 00:14:28.737 { 00:14:28.737 "name": "BaseBdev3", 00:14:28.737 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:28.737 "is_configured": true, 00:14:28.737 "data_offset": 2048, 00:14:28.737 "data_size": 63488 00:14:28.737 }, 00:14:28.737 { 00:14:28.737 "name": "BaseBdev4", 00:14:28.737 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:28.737 "is_configured": true, 00:14:28.737 "data_offset": 2048, 00:14:28.737 "data_size": 63488 00:14:28.737 } 00:14:28.737 ] 00:14:28.737 }' 00:14:28.737 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.737 19:43:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.306 "name": "raid_bdev1", 00:14:29.306 "uuid": "64b8602a-3176-4ed8-a907-c5cf3285e055", 00:14:29.306 "strip_size_kb": 0, 00:14:29.306 "state": "online", 00:14:29.306 "raid_level": "raid1", 00:14:29.306 "superblock": true, 00:14:29.306 "num_base_bdevs": 4, 00:14:29.306 "num_base_bdevs_discovered": 2, 00:14:29.306 "num_base_bdevs_operational": 2, 00:14:29.306 "base_bdevs_list": [ 00:14:29.306 { 00:14:29.306 "name": null, 00:14:29.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.306 "is_configured": false, 00:14:29.306 "data_offset": 0, 00:14:29.306 "data_size": 63488 00:14:29.306 }, 00:14:29.306 { 00:14:29.306 "name": null, 00:14:29.306 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:29.306 "is_configured": false, 00:14:29.306 "data_offset": 2048, 00:14:29.306 "data_size": 63488 00:14:29.306 }, 00:14:29.306 { 00:14:29.306 "name": "BaseBdev3", 00:14:29.306 "uuid": "6588fa7b-25dc-5f1c-b4fe-65c9fea99d73", 00:14:29.306 "is_configured": true, 00:14:29.306 "data_offset": 2048, 00:14:29.306 "data_size": 63488 00:14:29.306 }, 00:14:29.306 { 00:14:29.306 "name": "BaseBdev4", 00:14:29.306 "uuid": "8aedc29b-e325-5546-82b2-527351f2c5d7", 00:14:29.306 "is_configured": true, 00:14:29.306 "data_offset": 2048, 00:14:29.306 "data_size": 63488 00:14:29.306 } 00:14:29.306 ] 00:14:29.306 }' 00:14:29.306 19:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 80848 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 80848 ']' 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 80848 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80848 00:14:29.306 killing process with pid 80848 00:14:29.306 Received shutdown signal, test time was about 17.960820 seconds 00:14:29.306 00:14:29.306 Latency(us) 00:14:29.306 [2024-12-12T19:43:12.151Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:14:29.306 [2024-12-12T19:43:12.151Z] =================================================================================================================== 00:14:29.306 [2024-12-12T19:43:12.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80848' 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 80848 00:14:29.306 [2024-12-12 19:43:12.102340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.306 [2024-12-12 19:43:12.102466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.306 19:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 80848 00:14:29.306 [2024-12-12 19:43:12.102533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.306 [2024-12-12 19:43:12.102546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:29.875 [2024-12-12 19:43:12.501041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:30.815 19:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:30.815 00:14:30.815 real 0m21.284s 00:14:30.815 user 0m27.810s 00:14:30.815 sys 0m2.603s 00:14:30.815 19:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.815 19:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.815 ************************************ 00:14:30.815 END TEST raid_rebuild_test_sb_io 00:14:30.815 
************************************ 00:14:31.075 19:43:13 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:31.075 19:43:13 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:31.075 19:43:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:31.075 19:43:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.075 19:43:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.075 ************************************ 00:14:31.075 START TEST raid5f_state_function_test 00:14:31.075 ************************************ 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:31.075 19:43:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:31.075 Process raid pid: 81568 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81568 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:31.075 
19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81568' 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81568 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81568 ']' 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.075 19:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.075 [2024-12-12 19:43:13.781636] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:14:31.075 [2024-12-12 19:43:13.781752] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.335 [2024-12-12 19:43:13.955528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.335 [2024-12-12 19:43:14.066468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.595 [2024-12-12 19:43:14.248218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.595 [2024-12-12 19:43:14.248254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.855 [2024-12-12 19:43:14.629379] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.855 [2024-12-12 19:43:14.629435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.855 [2024-12-12 19:43:14.629445] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.855 [2024-12-12 19:43:14.629454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.855 [2024-12-12 19:43:14.629460] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:31.855 [2024-12-12 19:43:14.629469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.855 "name": "Existed_Raid", 00:14:31.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.855 "strip_size_kb": 64, 00:14:31.855 "state": "configuring", 00:14:31.855 "raid_level": "raid5f", 00:14:31.855 "superblock": false, 00:14:31.855 "num_base_bdevs": 3, 00:14:31.855 "num_base_bdevs_discovered": 0, 00:14:31.855 "num_base_bdevs_operational": 3, 00:14:31.855 "base_bdevs_list": [ 00:14:31.855 { 00:14:31.855 "name": "BaseBdev1", 00:14:31.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.855 "is_configured": false, 00:14:31.855 "data_offset": 0, 00:14:31.855 "data_size": 0 00:14:31.855 }, 00:14:31.855 { 00:14:31.855 "name": "BaseBdev2", 00:14:31.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.855 "is_configured": false, 00:14:31.855 "data_offset": 0, 00:14:31.855 "data_size": 0 00:14:31.855 }, 00:14:31.855 { 00:14:31.855 "name": "BaseBdev3", 00:14:31.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.855 "is_configured": false, 00:14:31.855 "data_offset": 0, 00:14:31.855 "data_size": 0 00:14:31.855 } 00:14:31.855 ] 00:14:31.855 }' 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.855 19:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.423 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.423 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.423 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.423 [2024-12-12 19:43:15.056637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.423 [2024-12-12 19:43:15.056720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:32.423 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.423 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:32.423 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.424 [2024-12-12 19:43:15.068626] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.424 [2024-12-12 19:43:15.068707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.424 [2024-12-12 19:43:15.068733] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.424 [2024-12-12 19:43:15.068754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.424 [2024-12-12 19:43:15.068772] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.424 [2024-12-12 19:43:15.068791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.424 [2024-12-12 19:43:15.114436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.424 BaseBdev1 00:14:32.424 19:43:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.424 [ 00:14:32.424 { 00:14:32.424 "name": "BaseBdev1", 00:14:32.424 "aliases": [ 00:14:32.424 "5260e379-506b-4fde-816c-5aa2a22c5815" 00:14:32.424 ], 00:14:32.424 "product_name": "Malloc disk", 00:14:32.424 "block_size": 512, 00:14:32.424 "num_blocks": 65536, 00:14:32.424 "uuid": "5260e379-506b-4fde-816c-5aa2a22c5815", 00:14:32.424 "assigned_rate_limits": { 00:14:32.424 "rw_ios_per_sec": 0, 00:14:32.424 
"rw_mbytes_per_sec": 0, 00:14:32.424 "r_mbytes_per_sec": 0, 00:14:32.424 "w_mbytes_per_sec": 0 00:14:32.424 }, 00:14:32.424 "claimed": true, 00:14:32.424 "claim_type": "exclusive_write", 00:14:32.424 "zoned": false, 00:14:32.424 "supported_io_types": { 00:14:32.424 "read": true, 00:14:32.424 "write": true, 00:14:32.424 "unmap": true, 00:14:32.424 "flush": true, 00:14:32.424 "reset": true, 00:14:32.424 "nvme_admin": false, 00:14:32.424 "nvme_io": false, 00:14:32.424 "nvme_io_md": false, 00:14:32.424 "write_zeroes": true, 00:14:32.424 "zcopy": true, 00:14:32.424 "get_zone_info": false, 00:14:32.424 "zone_management": false, 00:14:32.424 "zone_append": false, 00:14:32.424 "compare": false, 00:14:32.424 "compare_and_write": false, 00:14:32.424 "abort": true, 00:14:32.424 "seek_hole": false, 00:14:32.424 "seek_data": false, 00:14:32.424 "copy": true, 00:14:32.424 "nvme_iov_md": false 00:14:32.424 }, 00:14:32.424 "memory_domains": [ 00:14:32.424 { 00:14:32.424 "dma_device_id": "system", 00:14:32.424 "dma_device_type": 1 00:14:32.424 }, 00:14:32.424 { 00:14:32.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.424 "dma_device_type": 2 00:14:32.424 } 00:14:32.424 ], 00:14:32.424 "driver_specific": {} 00:14:32.424 } 00:14:32.424 ] 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.424 19:43:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.424 "name": "Existed_Raid", 00:14:32.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.424 "strip_size_kb": 64, 00:14:32.424 "state": "configuring", 00:14:32.424 "raid_level": "raid5f", 00:14:32.424 "superblock": false, 00:14:32.424 "num_base_bdevs": 3, 00:14:32.424 "num_base_bdevs_discovered": 1, 00:14:32.424 "num_base_bdevs_operational": 3, 00:14:32.424 "base_bdevs_list": [ 00:14:32.424 { 00:14:32.424 "name": "BaseBdev1", 00:14:32.424 "uuid": "5260e379-506b-4fde-816c-5aa2a22c5815", 00:14:32.424 "is_configured": true, 00:14:32.424 "data_offset": 0, 00:14:32.424 "data_size": 65536 00:14:32.424 }, 00:14:32.424 { 00:14:32.424 "name": 
"BaseBdev2", 00:14:32.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.424 "is_configured": false, 00:14:32.424 "data_offset": 0, 00:14:32.424 "data_size": 0 00:14:32.424 }, 00:14:32.424 { 00:14:32.424 "name": "BaseBdev3", 00:14:32.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.424 "is_configured": false, 00:14:32.424 "data_offset": 0, 00:14:32.424 "data_size": 0 00:14:32.424 } 00:14:32.424 ] 00:14:32.424 }' 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.424 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.993 [2024-12-12 19:43:15.597788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.993 [2024-12-12 19:43:15.597874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.993 [2024-12-12 19:43:15.609809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.993 [2024-12-12 19:43:15.611578] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:32.993 [2024-12-12 19:43:15.611669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.993 [2024-12-12 19:43:15.611682] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.993 [2024-12-12 19:43:15.611692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.993 "name": "Existed_Raid", 00:14:32.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.993 "strip_size_kb": 64, 00:14:32.993 "state": "configuring", 00:14:32.993 "raid_level": "raid5f", 00:14:32.993 "superblock": false, 00:14:32.993 "num_base_bdevs": 3, 00:14:32.993 "num_base_bdevs_discovered": 1, 00:14:32.993 "num_base_bdevs_operational": 3, 00:14:32.993 "base_bdevs_list": [ 00:14:32.993 { 00:14:32.993 "name": "BaseBdev1", 00:14:32.993 "uuid": "5260e379-506b-4fde-816c-5aa2a22c5815", 00:14:32.993 "is_configured": true, 00:14:32.993 "data_offset": 0, 00:14:32.993 "data_size": 65536 00:14:32.993 }, 00:14:32.993 { 00:14:32.993 "name": "BaseBdev2", 00:14:32.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.993 "is_configured": false, 00:14:32.993 "data_offset": 0, 00:14:32.993 "data_size": 0 00:14:32.993 }, 00:14:32.993 { 00:14:32.993 "name": "BaseBdev3", 00:14:32.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.993 "is_configured": false, 00:14:32.993 "data_offset": 0, 00:14:32.993 "data_size": 0 00:14:32.993 } 00:14:32.993 ] 00:14:32.993 }' 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.993 19:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.252 19:43:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:33.252 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.252 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.511 [2024-12-12 19:43:16.098038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.511 BaseBdev2 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.511 [ 00:14:33.511 { 00:14:33.511 "name": "BaseBdev2", 00:14:33.511 "aliases": [ 00:14:33.511 "070bd172-fb38-4ae2-8cf0-6588c4e61803" 00:14:33.511 ], 00:14:33.511 "product_name": "Malloc disk", 00:14:33.511 "block_size": 512, 00:14:33.511 "num_blocks": 65536, 00:14:33.511 "uuid": "070bd172-fb38-4ae2-8cf0-6588c4e61803", 00:14:33.511 "assigned_rate_limits": { 00:14:33.511 "rw_ios_per_sec": 0, 00:14:33.511 "rw_mbytes_per_sec": 0, 00:14:33.511 "r_mbytes_per_sec": 0, 00:14:33.511 "w_mbytes_per_sec": 0 00:14:33.511 }, 00:14:33.511 "claimed": true, 00:14:33.511 "claim_type": "exclusive_write", 00:14:33.511 "zoned": false, 00:14:33.511 "supported_io_types": { 00:14:33.511 "read": true, 00:14:33.511 "write": true, 00:14:33.511 "unmap": true, 00:14:33.511 "flush": true, 00:14:33.511 "reset": true, 00:14:33.511 "nvme_admin": false, 00:14:33.511 "nvme_io": false, 00:14:33.511 "nvme_io_md": false, 00:14:33.511 "write_zeroes": true, 00:14:33.511 "zcopy": true, 00:14:33.511 "get_zone_info": false, 00:14:33.511 "zone_management": false, 00:14:33.511 "zone_append": false, 00:14:33.511 "compare": false, 00:14:33.511 "compare_and_write": false, 00:14:33.511 "abort": true, 00:14:33.511 "seek_hole": false, 00:14:33.511 "seek_data": false, 00:14:33.511 "copy": true, 00:14:33.511 "nvme_iov_md": false 00:14:33.511 }, 00:14:33.511 "memory_domains": [ 00:14:33.511 { 00:14:33.511 "dma_device_id": "system", 00:14:33.511 "dma_device_type": 1 00:14:33.511 }, 00:14:33.511 { 00:14:33.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.511 "dma_device_type": 2 00:14:33.511 } 00:14:33.511 ], 00:14:33.511 "driver_specific": {} 00:14:33.511 } 00:14:33.511 ] 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.511 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:33.512 "name": "Existed_Raid", 00:14:33.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.512 "strip_size_kb": 64, 00:14:33.512 "state": "configuring", 00:14:33.512 "raid_level": "raid5f", 00:14:33.512 "superblock": false, 00:14:33.512 "num_base_bdevs": 3, 00:14:33.512 "num_base_bdevs_discovered": 2, 00:14:33.512 "num_base_bdevs_operational": 3, 00:14:33.512 "base_bdevs_list": [ 00:14:33.512 { 00:14:33.512 "name": "BaseBdev1", 00:14:33.512 "uuid": "5260e379-506b-4fde-816c-5aa2a22c5815", 00:14:33.512 "is_configured": true, 00:14:33.512 "data_offset": 0, 00:14:33.512 "data_size": 65536 00:14:33.512 }, 00:14:33.512 { 00:14:33.512 "name": "BaseBdev2", 00:14:33.512 "uuid": "070bd172-fb38-4ae2-8cf0-6588c4e61803", 00:14:33.512 "is_configured": true, 00:14:33.512 "data_offset": 0, 00:14:33.512 "data_size": 65536 00:14:33.512 }, 00:14:33.512 { 00:14:33.512 "name": "BaseBdev3", 00:14:33.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.512 "is_configured": false, 00:14:33.512 "data_offset": 0, 00:14:33.512 "data_size": 0 00:14:33.512 } 00:14:33.512 ] 00:14:33.512 }' 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.512 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.771 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.771 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.771 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.771 [2024-12-12 19:43:16.568474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.771 [2024-12-12 19:43:16.568570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:33.772 [2024-12-12 19:43:16.568588] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:33.772 [2024-12-12 19:43:16.568858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:33.772 [2024-12-12 19:43:16.573928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:33.772 [2024-12-12 19:43:16.573986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:33.772 [2024-12-12 19:43:16.574304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.772 BaseBdev3 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.772 [ 00:14:33.772 { 00:14:33.772 "name": "BaseBdev3", 00:14:33.772 "aliases": [ 00:14:33.772 "6d3f855a-5b2e-48fb-8e80-34a81d5d54aa" 00:14:33.772 ], 00:14:33.772 "product_name": "Malloc disk", 00:14:33.772 "block_size": 512, 00:14:33.772 "num_blocks": 65536, 00:14:33.772 "uuid": "6d3f855a-5b2e-48fb-8e80-34a81d5d54aa", 00:14:33.772 "assigned_rate_limits": { 00:14:33.772 "rw_ios_per_sec": 0, 00:14:33.772 "rw_mbytes_per_sec": 0, 00:14:33.772 "r_mbytes_per_sec": 0, 00:14:33.772 "w_mbytes_per_sec": 0 00:14:33.772 }, 00:14:33.772 "claimed": true, 00:14:33.772 "claim_type": "exclusive_write", 00:14:33.772 "zoned": false, 00:14:33.772 "supported_io_types": { 00:14:33.772 "read": true, 00:14:33.772 "write": true, 00:14:33.772 "unmap": true, 00:14:33.772 "flush": true, 00:14:33.772 "reset": true, 00:14:33.772 "nvme_admin": false, 00:14:33.772 "nvme_io": false, 00:14:33.772 "nvme_io_md": false, 00:14:33.772 "write_zeroes": true, 00:14:33.772 "zcopy": true, 00:14:33.772 "get_zone_info": false, 00:14:33.772 "zone_management": false, 00:14:33.772 "zone_append": false, 00:14:33.772 "compare": false, 00:14:33.772 "compare_and_write": false, 00:14:33.772 "abort": true, 00:14:33.772 "seek_hole": false, 00:14:33.772 "seek_data": false, 00:14:33.772 "copy": true, 00:14:33.772 "nvme_iov_md": false 00:14:33.772 }, 00:14:33.772 "memory_domains": [ 00:14:33.772 { 00:14:33.772 "dma_device_id": "system", 00:14:33.772 "dma_device_type": 1 00:14:33.772 }, 00:14:33.772 { 00:14:33.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.772 "dma_device_type": 2 00:14:33.772 } 00:14:33.772 ], 00:14:33.772 "driver_specific": {} 00:14:33.772 } 00:14:33.772 ] 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.772 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.031 19:43:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.031 "name": "Existed_Raid", 00:14:34.031 "uuid": "e4aa7c80-aae2-4f1c-9a49-ce00beace0fe", 00:14:34.031 "strip_size_kb": 64, 00:14:34.031 "state": "online", 00:14:34.031 "raid_level": "raid5f", 00:14:34.031 "superblock": false, 00:14:34.031 "num_base_bdevs": 3, 00:14:34.031 "num_base_bdevs_discovered": 3, 00:14:34.031 "num_base_bdevs_operational": 3, 00:14:34.031 "base_bdevs_list": [ 00:14:34.031 { 00:14:34.031 "name": "BaseBdev1", 00:14:34.031 "uuid": "5260e379-506b-4fde-816c-5aa2a22c5815", 00:14:34.031 "is_configured": true, 00:14:34.031 "data_offset": 0, 00:14:34.031 "data_size": 65536 00:14:34.031 }, 00:14:34.031 { 00:14:34.031 "name": "BaseBdev2", 00:14:34.031 "uuid": "070bd172-fb38-4ae2-8cf0-6588c4e61803", 00:14:34.031 "is_configured": true, 00:14:34.031 "data_offset": 0, 00:14:34.031 "data_size": 65536 00:14:34.031 }, 00:14:34.031 { 00:14:34.031 "name": "BaseBdev3", 00:14:34.031 "uuid": "6d3f855a-5b2e-48fb-8e80-34a81d5d54aa", 00:14:34.031 "is_configured": true, 00:14:34.031 "data_offset": 0, 00:14:34.031 "data_size": 65536 00:14:34.031 } 00:14:34.031 ] 00:14:34.031 }' 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.031 19:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:34.291 19:43:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.291 [2024-12-12 19:43:17.079386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:34.291 "name": "Existed_Raid", 00:14:34.291 "aliases": [ 00:14:34.291 "e4aa7c80-aae2-4f1c-9a49-ce00beace0fe" 00:14:34.291 ], 00:14:34.291 "product_name": "Raid Volume", 00:14:34.291 "block_size": 512, 00:14:34.291 "num_blocks": 131072, 00:14:34.291 "uuid": "e4aa7c80-aae2-4f1c-9a49-ce00beace0fe", 00:14:34.291 "assigned_rate_limits": { 00:14:34.291 "rw_ios_per_sec": 0, 00:14:34.291 "rw_mbytes_per_sec": 0, 00:14:34.291 "r_mbytes_per_sec": 0, 00:14:34.291 "w_mbytes_per_sec": 0 00:14:34.291 }, 00:14:34.291 "claimed": false, 00:14:34.291 "zoned": false, 00:14:34.291 "supported_io_types": { 00:14:34.291 "read": true, 00:14:34.291 "write": true, 00:14:34.291 "unmap": false, 00:14:34.291 "flush": false, 00:14:34.291 "reset": true, 00:14:34.291 "nvme_admin": false, 00:14:34.291 "nvme_io": false, 00:14:34.291 "nvme_io_md": false, 00:14:34.291 "write_zeroes": true, 00:14:34.291 "zcopy": false, 00:14:34.291 "get_zone_info": false, 00:14:34.291 "zone_management": false, 00:14:34.291 "zone_append": false, 
00:14:34.291 "compare": false, 00:14:34.291 "compare_and_write": false, 00:14:34.291 "abort": false, 00:14:34.291 "seek_hole": false, 00:14:34.291 "seek_data": false, 00:14:34.291 "copy": false, 00:14:34.291 "nvme_iov_md": false 00:14:34.291 }, 00:14:34.291 "driver_specific": { 00:14:34.291 "raid": { 00:14:34.291 "uuid": "e4aa7c80-aae2-4f1c-9a49-ce00beace0fe", 00:14:34.291 "strip_size_kb": 64, 00:14:34.291 "state": "online", 00:14:34.291 "raid_level": "raid5f", 00:14:34.291 "superblock": false, 00:14:34.291 "num_base_bdevs": 3, 00:14:34.291 "num_base_bdevs_discovered": 3, 00:14:34.291 "num_base_bdevs_operational": 3, 00:14:34.291 "base_bdevs_list": [ 00:14:34.291 { 00:14:34.291 "name": "BaseBdev1", 00:14:34.291 "uuid": "5260e379-506b-4fde-816c-5aa2a22c5815", 00:14:34.291 "is_configured": true, 00:14:34.291 "data_offset": 0, 00:14:34.291 "data_size": 65536 00:14:34.291 }, 00:14:34.291 { 00:14:34.291 "name": "BaseBdev2", 00:14:34.291 "uuid": "070bd172-fb38-4ae2-8cf0-6588c4e61803", 00:14:34.291 "is_configured": true, 00:14:34.291 "data_offset": 0, 00:14:34.291 "data_size": 65536 00:14:34.291 }, 00:14:34.291 { 00:14:34.291 "name": "BaseBdev3", 00:14:34.291 "uuid": "6d3f855a-5b2e-48fb-8e80-34a81d5d54aa", 00:14:34.291 "is_configured": true, 00:14:34.291 "data_offset": 0, 00:14:34.291 "data_size": 65536 00:14:34.291 } 00:14:34.291 ] 00:14:34.291 } 00:14:34.291 } 00:14:34.291 }' 00:14:34.291 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:34.551 BaseBdev2 00:14:34.551 BaseBdev3' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.551 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.551 [2024-12-12 19:43:17.374747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:34.811 
19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.811 "name": "Existed_Raid", 00:14:34.811 "uuid": "e4aa7c80-aae2-4f1c-9a49-ce00beace0fe", 00:14:34.811 "strip_size_kb": 64, 00:14:34.811 "state": 
"online", 00:14:34.811 "raid_level": "raid5f", 00:14:34.811 "superblock": false, 00:14:34.811 "num_base_bdevs": 3, 00:14:34.811 "num_base_bdevs_discovered": 2, 00:14:34.811 "num_base_bdevs_operational": 2, 00:14:34.811 "base_bdevs_list": [ 00:14:34.811 { 00:14:34.811 "name": null, 00:14:34.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.811 "is_configured": false, 00:14:34.811 "data_offset": 0, 00:14:34.811 "data_size": 65536 00:14:34.811 }, 00:14:34.811 { 00:14:34.811 "name": "BaseBdev2", 00:14:34.811 "uuid": "070bd172-fb38-4ae2-8cf0-6588c4e61803", 00:14:34.811 "is_configured": true, 00:14:34.811 "data_offset": 0, 00:14:34.811 "data_size": 65536 00:14:34.811 }, 00:14:34.811 { 00:14:34.811 "name": "BaseBdev3", 00:14:34.811 "uuid": "6d3f855a-5b2e-48fb-8e80-34a81d5d54aa", 00:14:34.811 "is_configured": true, 00:14:34.811 "data_offset": 0, 00:14:34.811 "data_size": 65536 00:14:34.811 } 00:14:34.811 ] 00:14:34.811 }' 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.811 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.070 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:35.070 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.330 19:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.330 [2024-12-12 19:43:17.968526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:35.330 [2024-12-12 19:43:17.968638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.330 [2024-12-12 19:43:18.058554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.330 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.330 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:35.330 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.330 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.330 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:35.330 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.330 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.330 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.331 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:35.331 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
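The `verify_raid_bdev_state` checks driving this test reduce to parsing the `bdev_raid_get_bdevs` JSON and comparing a handful of fields. A minimal Python sketch of that comparison, using the degraded raid5f info captured in the log above (the helper name and the inlined JSON are illustrative; the actual test performs these checks with `jq` inside `bdev_raid.sh`):

```python
import json

# raid_bdev_info as dumped by `bdev_raid_get_bdevs all` after BaseBdev1
# was deleted: a 3-disk raid5f array stays online with 2 base bdevs.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the field comparisons made by verify_raid_bdev_state in bdev_raid.sh.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # Discovered base bdevs are the is_configured entries in base_bdevs_list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered

verify_raid_bdev_state(raid_bdev_info, "online", "raid5f", 64, 2)
```

Because raid5f tolerates a single base-bdev failure, the array above reports `state: online` with only 2 of 3 members; deleting a second base bdev (as the log does next with BaseBdev2 and BaseBdev3) pushes it past that tolerance.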
00:14:35.331 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:35.331 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.331 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.331 [2024-12-12 19:43:18.114476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:35.331 [2024-12-12 19:43:18.114593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.591 BaseBdev2 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.591 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:35.591 [ 00:14:35.591 { 00:14:35.591 "name": "BaseBdev2", 00:14:35.591 "aliases": [ 00:14:35.591 "fc344e4b-b9ba-45eb-b70e-07025778e310" 00:14:35.591 ], 00:14:35.591 "product_name": "Malloc disk", 00:14:35.591 "block_size": 512, 00:14:35.591 "num_blocks": 65536, 00:14:35.591 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:35.591 "assigned_rate_limits": { 00:14:35.591 "rw_ios_per_sec": 0, 00:14:35.591 "rw_mbytes_per_sec": 0, 00:14:35.591 "r_mbytes_per_sec": 0, 00:14:35.591 "w_mbytes_per_sec": 0 00:14:35.591 }, 00:14:35.591 "claimed": false, 00:14:35.591 "zoned": false, 00:14:35.591 "supported_io_types": { 00:14:35.591 "read": true, 00:14:35.591 "write": true, 00:14:35.591 "unmap": true, 00:14:35.591 "flush": true, 00:14:35.591 "reset": true, 00:14:35.591 "nvme_admin": false, 00:14:35.591 "nvme_io": false, 00:14:35.591 "nvme_io_md": false, 00:14:35.591 "write_zeroes": true, 00:14:35.591 "zcopy": true, 00:14:35.591 "get_zone_info": false, 00:14:35.591 "zone_management": false, 00:14:35.591 "zone_append": false, 00:14:35.591 "compare": false, 00:14:35.591 "compare_and_write": false, 00:14:35.591 "abort": true, 00:14:35.591 "seek_hole": false, 00:14:35.591 "seek_data": false, 00:14:35.591 "copy": true, 00:14:35.592 "nvme_iov_md": false 00:14:35.592 }, 00:14:35.592 "memory_domains": [ 00:14:35.592 { 00:14:35.592 "dma_device_id": "system", 00:14:35.592 "dma_device_type": 1 00:14:35.592 }, 00:14:35.592 { 00:14:35.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.592 "dma_device_type": 2 00:14:35.592 } 00:14:35.592 ], 00:14:35.592 "driver_specific": {} 00:14:35.592 } 00:14:35.592 ] 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.592 BaseBdev3 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:35.592 [ 00:14:35.592 { 00:14:35.592 "name": "BaseBdev3", 00:14:35.592 "aliases": [ 00:14:35.592 "1da855a1-52e8-4d90-96e8-e5f650bed715" 00:14:35.592 ], 00:14:35.592 "product_name": "Malloc disk", 00:14:35.592 "block_size": 512, 00:14:35.592 "num_blocks": 65536, 00:14:35.592 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:35.592 "assigned_rate_limits": { 00:14:35.592 "rw_ios_per_sec": 0, 00:14:35.592 "rw_mbytes_per_sec": 0, 00:14:35.592 "r_mbytes_per_sec": 0, 00:14:35.592 "w_mbytes_per_sec": 0 00:14:35.592 }, 00:14:35.592 "claimed": false, 00:14:35.592 "zoned": false, 00:14:35.592 "supported_io_types": { 00:14:35.592 "read": true, 00:14:35.592 "write": true, 00:14:35.592 "unmap": true, 00:14:35.592 "flush": true, 00:14:35.592 "reset": true, 00:14:35.592 "nvme_admin": false, 00:14:35.592 "nvme_io": false, 00:14:35.592 "nvme_io_md": false, 00:14:35.592 "write_zeroes": true, 00:14:35.592 "zcopy": true, 00:14:35.592 "get_zone_info": false, 00:14:35.592 "zone_management": false, 00:14:35.592 "zone_append": false, 00:14:35.592 "compare": false, 00:14:35.592 "compare_and_write": false, 00:14:35.592 "abort": true, 00:14:35.592 "seek_hole": false, 00:14:35.592 "seek_data": false, 00:14:35.592 "copy": true, 00:14:35.592 "nvme_iov_md": false 00:14:35.592 }, 00:14:35.592 "memory_domains": [ 00:14:35.592 { 00:14:35.592 "dma_device_id": "system", 00:14:35.592 "dma_device_type": 1 00:14:35.592 }, 00:14:35.592 { 00:14:35.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.592 "dma_device_type": 2 00:14:35.592 } 00:14:35.592 ], 00:14:35.592 "driver_specific": {} 00:14:35.592 } 00:14:35.592 ] 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:35.592 19:43:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.592 [2024-12-12 19:43:18.415737] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.592 [2024-12-12 19:43:18.415850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.592 [2024-12-12 19:43:18.415888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.592 [2024-12-12 19:43:18.417657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.592 19:43:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.592 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.852 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.852 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.852 "name": "Existed_Raid", 00:14:35.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.852 "strip_size_kb": 64, 00:14:35.852 "state": "configuring", 00:14:35.852 "raid_level": "raid5f", 00:14:35.852 "superblock": false, 00:14:35.852 "num_base_bdevs": 3, 00:14:35.852 "num_base_bdevs_discovered": 2, 00:14:35.852 "num_base_bdevs_operational": 3, 00:14:35.852 "base_bdevs_list": [ 00:14:35.852 { 00:14:35.852 "name": "BaseBdev1", 00:14:35.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.852 "is_configured": false, 00:14:35.852 "data_offset": 0, 00:14:35.852 "data_size": 0 00:14:35.852 }, 00:14:35.852 { 00:14:35.852 "name": "BaseBdev2", 00:14:35.852 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:35.852 "is_configured": true, 00:14:35.852 "data_offset": 0, 00:14:35.852 "data_size": 65536 00:14:35.852 }, 00:14:35.852 { 00:14:35.852 "name": "BaseBdev3", 00:14:35.852 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:35.852 "is_configured": true, 
00:14:35.852 "data_offset": 0, 00:14:35.852 "data_size": 65536 00:14:35.852 } 00:14:35.852 ] 00:14:35.852 }' 00:14:35.852 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.852 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.111 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.112 [2024-12-12 19:43:18.862967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.112 19:43:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.112 "name": "Existed_Raid", 00:14:36.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.112 "strip_size_kb": 64, 00:14:36.112 "state": "configuring", 00:14:36.112 "raid_level": "raid5f", 00:14:36.112 "superblock": false, 00:14:36.112 "num_base_bdevs": 3, 00:14:36.112 "num_base_bdevs_discovered": 1, 00:14:36.112 "num_base_bdevs_operational": 3, 00:14:36.112 "base_bdevs_list": [ 00:14:36.112 { 00:14:36.112 "name": "BaseBdev1", 00:14:36.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.112 "is_configured": false, 00:14:36.112 "data_offset": 0, 00:14:36.112 "data_size": 0 00:14:36.112 }, 00:14:36.112 { 00:14:36.112 "name": null, 00:14:36.112 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:36.112 "is_configured": false, 00:14:36.112 "data_offset": 0, 00:14:36.112 "data_size": 65536 00:14:36.112 }, 00:14:36.112 { 00:14:36.112 "name": "BaseBdev3", 00:14:36.112 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:36.112 "is_configured": true, 00:14:36.112 "data_offset": 0, 00:14:36.112 "data_size": 65536 00:14:36.112 } 00:14:36.112 ] 00:14:36.112 }' 00:14:36.112 19:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.112 19:43:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.682 [2024-12-12 19:43:19.388859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.682 BaseBdev1 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.682 19:43:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.682 [ 00:14:36.682 { 00:14:36.682 "name": "BaseBdev1", 00:14:36.682 "aliases": [ 00:14:36.682 "806b34a4-6134-4ed6-aa92-1e7318986ad9" 00:14:36.682 ], 00:14:36.682 "product_name": "Malloc disk", 00:14:36.682 "block_size": 512, 00:14:36.682 "num_blocks": 65536, 00:14:36.682 "uuid": "806b34a4-6134-4ed6-aa92-1e7318986ad9", 00:14:36.682 "assigned_rate_limits": { 00:14:36.682 "rw_ios_per_sec": 0, 00:14:36.682 "rw_mbytes_per_sec": 0, 00:14:36.682 "r_mbytes_per_sec": 0, 00:14:36.682 "w_mbytes_per_sec": 0 00:14:36.682 }, 00:14:36.682 "claimed": true, 00:14:36.682 "claim_type": "exclusive_write", 00:14:36.682 "zoned": false, 00:14:36.682 "supported_io_types": { 00:14:36.682 "read": true, 00:14:36.682 "write": true, 00:14:36.682 "unmap": true, 00:14:36.682 "flush": true, 00:14:36.682 "reset": true, 00:14:36.682 "nvme_admin": false, 00:14:36.682 "nvme_io": false, 00:14:36.682 "nvme_io_md": false, 00:14:36.682 "write_zeroes": true, 00:14:36.682 "zcopy": true, 00:14:36.682 "get_zone_info": false, 00:14:36.682 "zone_management": false, 00:14:36.682 "zone_append": false, 00:14:36.682 
"compare": false, 00:14:36.682 "compare_and_write": false, 00:14:36.682 "abort": true, 00:14:36.682 "seek_hole": false, 00:14:36.682 "seek_data": false, 00:14:36.682 "copy": true, 00:14:36.682 "nvme_iov_md": false 00:14:36.682 }, 00:14:36.682 "memory_domains": [ 00:14:36.682 { 00:14:36.682 "dma_device_id": "system", 00:14:36.682 "dma_device_type": 1 00:14:36.682 }, 00:14:36.682 { 00:14:36.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.682 "dma_device_type": 2 00:14:36.682 } 00:14:36.682 ], 00:14:36.682 "driver_specific": {} 00:14:36.682 } 00:14:36.682 ] 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.682 19:43:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.682 "name": "Existed_Raid", 00:14:36.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.682 "strip_size_kb": 64, 00:14:36.682 "state": "configuring", 00:14:36.682 "raid_level": "raid5f", 00:14:36.682 "superblock": false, 00:14:36.682 "num_base_bdevs": 3, 00:14:36.682 "num_base_bdevs_discovered": 2, 00:14:36.682 "num_base_bdevs_operational": 3, 00:14:36.682 "base_bdevs_list": [ 00:14:36.682 { 00:14:36.682 "name": "BaseBdev1", 00:14:36.682 "uuid": "806b34a4-6134-4ed6-aa92-1e7318986ad9", 00:14:36.682 "is_configured": true, 00:14:36.682 "data_offset": 0, 00:14:36.682 "data_size": 65536 00:14:36.682 }, 00:14:36.682 { 00:14:36.682 "name": null, 00:14:36.682 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:36.682 "is_configured": false, 00:14:36.682 "data_offset": 0, 00:14:36.682 "data_size": 65536 00:14:36.682 }, 00:14:36.682 { 00:14:36.682 "name": "BaseBdev3", 00:14:36.682 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:36.682 "is_configured": true, 00:14:36.682 "data_offset": 0, 00:14:36.682 "data_size": 65536 00:14:36.682 } 00:14:36.682 ] 00:14:36.682 }' 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.682 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.252 19:43:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.252 [2024-12-12 19:43:19.892026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.252 19:43:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.252 "name": "Existed_Raid", 00:14:37.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.252 "strip_size_kb": 64, 00:14:37.252 "state": "configuring", 00:14:37.252 "raid_level": "raid5f", 00:14:37.252 "superblock": false, 00:14:37.252 "num_base_bdevs": 3, 00:14:37.252 "num_base_bdevs_discovered": 1, 00:14:37.252 "num_base_bdevs_operational": 3, 00:14:37.252 "base_bdevs_list": [ 00:14:37.252 { 00:14:37.252 "name": "BaseBdev1", 00:14:37.252 "uuid": "806b34a4-6134-4ed6-aa92-1e7318986ad9", 00:14:37.252 "is_configured": true, 00:14:37.252 "data_offset": 0, 00:14:37.252 "data_size": 65536 00:14:37.252 }, 00:14:37.252 { 00:14:37.252 "name": null, 00:14:37.252 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:37.252 "is_configured": false, 00:14:37.252 "data_offset": 0, 00:14:37.252 "data_size": 65536 00:14:37.252 }, 00:14:37.252 { 00:14:37.252 "name": null, 
00:14:37.252 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:37.252 "is_configured": false, 00:14:37.252 "data_offset": 0, 00:14:37.252 "data_size": 65536 00:14:37.252 } 00:14:37.252 ] 00:14:37.252 }' 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.252 19:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.511 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.511 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.511 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.511 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.770 [2024-12-12 19:43:20.407166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.770 19:43:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.770 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.771 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.771 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.771 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.771 "name": "Existed_Raid", 00:14:37.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.771 "strip_size_kb": 64, 00:14:37.771 "state": "configuring", 00:14:37.771 "raid_level": "raid5f", 00:14:37.771 "superblock": false, 00:14:37.771 "num_base_bdevs": 3, 00:14:37.771 "num_base_bdevs_discovered": 2, 00:14:37.771 "num_base_bdevs_operational": 3, 00:14:37.771 "base_bdevs_list": [ 00:14:37.771 { 
00:14:37.771 "name": "BaseBdev1", 00:14:37.771 "uuid": "806b34a4-6134-4ed6-aa92-1e7318986ad9", 00:14:37.771 "is_configured": true, 00:14:37.771 "data_offset": 0, 00:14:37.771 "data_size": 65536 00:14:37.771 }, 00:14:37.771 { 00:14:37.771 "name": null, 00:14:37.771 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:37.771 "is_configured": false, 00:14:37.771 "data_offset": 0, 00:14:37.771 "data_size": 65536 00:14:37.771 }, 00:14:37.771 { 00:14:37.771 "name": "BaseBdev3", 00:14:37.771 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:37.771 "is_configured": true, 00:14:37.771 "data_offset": 0, 00:14:37.771 "data_size": 65536 00:14:37.771 } 00:14:37.771 ] 00:14:37.771 }' 00:14:37.771 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.771 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.031 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.031 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.031 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:38.031 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.031 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.031 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:38.031 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:38.031 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.031 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.031 [2024-12-12 19:43:20.830448] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.290 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.290 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.290 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.290 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.290 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.290 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.290 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.290 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.290 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.291 "name": "Existed_Raid", 00:14:38.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.291 "strip_size_kb": 64, 00:14:38.291 "state": "configuring", 00:14:38.291 "raid_level": "raid5f", 00:14:38.291 "superblock": false, 00:14:38.291 "num_base_bdevs": 3, 00:14:38.291 "num_base_bdevs_discovered": 1, 00:14:38.291 "num_base_bdevs_operational": 3, 00:14:38.291 "base_bdevs_list": [ 00:14:38.291 { 00:14:38.291 "name": null, 00:14:38.291 "uuid": "806b34a4-6134-4ed6-aa92-1e7318986ad9", 00:14:38.291 "is_configured": false, 00:14:38.291 "data_offset": 0, 00:14:38.291 "data_size": 65536 00:14:38.291 }, 00:14:38.291 { 00:14:38.291 "name": null, 00:14:38.291 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:38.291 "is_configured": false, 00:14:38.291 "data_offset": 0, 00:14:38.291 "data_size": 65536 00:14:38.291 }, 00:14:38.291 { 00:14:38.291 "name": "BaseBdev3", 00:14:38.291 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:38.291 "is_configured": true, 00:14:38.291 "data_offset": 0, 00:14:38.291 "data_size": 65536 00:14:38.291 } 00:14:38.291 ] 00:14:38.291 }' 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.291 19:43:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.550 [2024-12-12 19:43:21.350998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.550 19:43:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.550 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.809 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.809 "name": "Existed_Raid", 00:14:38.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.809 "strip_size_kb": 64, 00:14:38.809 "state": "configuring", 00:14:38.809 "raid_level": "raid5f", 00:14:38.809 "superblock": false, 00:14:38.809 "num_base_bdevs": 3, 00:14:38.809 "num_base_bdevs_discovered": 2, 00:14:38.809 "num_base_bdevs_operational": 3, 00:14:38.809 "base_bdevs_list": [ 00:14:38.809 { 00:14:38.809 "name": null, 00:14:38.809 "uuid": "806b34a4-6134-4ed6-aa92-1e7318986ad9", 00:14:38.809 "is_configured": false, 00:14:38.809 "data_offset": 0, 00:14:38.809 "data_size": 65536 00:14:38.809 }, 00:14:38.809 { 00:14:38.809 "name": "BaseBdev2", 00:14:38.809 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:38.809 "is_configured": true, 00:14:38.809 "data_offset": 0, 00:14:38.809 "data_size": 65536 00:14:38.809 }, 00:14:38.809 { 00:14:38.809 "name": "BaseBdev3", 00:14:38.809 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:38.809 "is_configured": true, 00:14:38.809 "data_offset": 0, 00:14:38.809 "data_size": 65536 00:14:38.809 } 00:14:38.809 ] 00:14:38.809 }' 00:14:38.809 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.809 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:39.069 
19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 806b34a4-6134-4ed6-aa92-1e7318986ad9 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.069 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.069 [2024-12-12 19:43:21.904609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:39.069 [2024-12-12 19:43:21.904710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:39.069 [2024-12-12 19:43:21.904725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:39.069 [2024-12-12 19:43:21.904977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:14:39.069 [2024-12-12 19:43:21.909782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:39.069 [2024-12-12 19:43:21.909803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:39.069 [2024-12-12 19:43:21.910055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.329 NewBaseBdev 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.329 19:43:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.329 [ 00:14:39.329 { 00:14:39.329 "name": "NewBaseBdev", 00:14:39.329 "aliases": [ 00:14:39.329 "806b34a4-6134-4ed6-aa92-1e7318986ad9" 00:14:39.329 ], 00:14:39.329 "product_name": "Malloc disk", 00:14:39.329 "block_size": 512, 00:14:39.329 "num_blocks": 65536, 00:14:39.329 "uuid": "806b34a4-6134-4ed6-aa92-1e7318986ad9", 00:14:39.329 "assigned_rate_limits": { 00:14:39.329 "rw_ios_per_sec": 0, 00:14:39.329 "rw_mbytes_per_sec": 0, 00:14:39.329 "r_mbytes_per_sec": 0, 00:14:39.329 "w_mbytes_per_sec": 0 00:14:39.329 }, 00:14:39.329 "claimed": true, 00:14:39.329 "claim_type": "exclusive_write", 00:14:39.329 "zoned": false, 00:14:39.329 "supported_io_types": { 00:14:39.329 "read": true, 00:14:39.329 "write": true, 00:14:39.329 "unmap": true, 00:14:39.329 "flush": true, 00:14:39.329 "reset": true, 00:14:39.329 "nvme_admin": false, 00:14:39.329 "nvme_io": false, 00:14:39.329 "nvme_io_md": false, 00:14:39.329 "write_zeroes": true, 00:14:39.329 "zcopy": true, 00:14:39.329 "get_zone_info": false, 00:14:39.329 "zone_management": false, 00:14:39.329 "zone_append": false, 00:14:39.329 "compare": false, 00:14:39.329 "compare_and_write": false, 00:14:39.329 "abort": true, 00:14:39.329 "seek_hole": false, 00:14:39.329 "seek_data": false, 00:14:39.329 "copy": true, 00:14:39.329 "nvme_iov_md": false 00:14:39.329 }, 00:14:39.329 "memory_domains": [ 00:14:39.329 { 00:14:39.329 "dma_device_id": "system", 00:14:39.329 "dma_device_type": 1 00:14:39.329 }, 00:14:39.329 { 00:14:39.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.329 "dma_device_type": 2 00:14:39.329 } 00:14:39.329 ], 00:14:39.329 "driver_specific": {} 00:14:39.329 } 00:14:39.329 ] 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:39.329 19:43:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.329 "name": "Existed_Raid", 00:14:39.329 "uuid": "d5c4c043-2d36-4591-a6ae-0f7e477ec4b0", 00:14:39.329 "strip_size_kb": 64, 00:14:39.329 "state": "online", 
00:14:39.329 "raid_level": "raid5f", 00:14:39.329 "superblock": false, 00:14:39.329 "num_base_bdevs": 3, 00:14:39.329 "num_base_bdevs_discovered": 3, 00:14:39.329 "num_base_bdevs_operational": 3, 00:14:39.329 "base_bdevs_list": [ 00:14:39.329 { 00:14:39.329 "name": "NewBaseBdev", 00:14:39.329 "uuid": "806b34a4-6134-4ed6-aa92-1e7318986ad9", 00:14:39.329 "is_configured": true, 00:14:39.329 "data_offset": 0, 00:14:39.329 "data_size": 65536 00:14:39.329 }, 00:14:39.329 { 00:14:39.329 "name": "BaseBdev2", 00:14:39.329 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:39.329 "is_configured": true, 00:14:39.329 "data_offset": 0, 00:14:39.329 "data_size": 65536 00:14:39.329 }, 00:14:39.329 { 00:14:39.329 "name": "BaseBdev3", 00:14:39.329 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:39.329 "is_configured": true, 00:14:39.329 "data_offset": 0, 00:14:39.329 "data_size": 65536 00:14:39.329 } 00:14:39.329 ] 00:14:39.329 }' 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.329 19:43:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.588 [2024-12-12 19:43:22.395802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.588 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.588 "name": "Existed_Raid", 00:14:39.588 "aliases": [ 00:14:39.588 "d5c4c043-2d36-4591-a6ae-0f7e477ec4b0" 00:14:39.588 ], 00:14:39.588 "product_name": "Raid Volume", 00:14:39.588 "block_size": 512, 00:14:39.588 "num_blocks": 131072, 00:14:39.588 "uuid": "d5c4c043-2d36-4591-a6ae-0f7e477ec4b0", 00:14:39.588 "assigned_rate_limits": { 00:14:39.588 "rw_ios_per_sec": 0, 00:14:39.588 "rw_mbytes_per_sec": 0, 00:14:39.588 "r_mbytes_per_sec": 0, 00:14:39.588 "w_mbytes_per_sec": 0 00:14:39.588 }, 00:14:39.588 "claimed": false, 00:14:39.588 "zoned": false, 00:14:39.588 "supported_io_types": { 00:14:39.588 "read": true, 00:14:39.588 "write": true, 00:14:39.588 "unmap": false, 00:14:39.588 "flush": false, 00:14:39.588 "reset": true, 00:14:39.588 "nvme_admin": false, 00:14:39.588 "nvme_io": false, 00:14:39.588 "nvme_io_md": false, 00:14:39.588 "write_zeroes": true, 00:14:39.588 "zcopy": false, 00:14:39.588 "get_zone_info": false, 00:14:39.588 "zone_management": false, 00:14:39.588 "zone_append": false, 00:14:39.588 "compare": false, 00:14:39.588 "compare_and_write": false, 00:14:39.588 "abort": false, 00:14:39.588 "seek_hole": false, 00:14:39.588 "seek_data": false, 00:14:39.588 "copy": false, 00:14:39.588 "nvme_iov_md": false 00:14:39.588 }, 00:14:39.588 "driver_specific": { 00:14:39.588 "raid": { 00:14:39.588 "uuid": "d5c4c043-2d36-4591-a6ae-0f7e477ec4b0", 
00:14:39.588 "strip_size_kb": 64, 00:14:39.588 "state": "online", 00:14:39.588 "raid_level": "raid5f", 00:14:39.588 "superblock": false, 00:14:39.588 "num_base_bdevs": 3, 00:14:39.588 "num_base_bdevs_discovered": 3, 00:14:39.588 "num_base_bdevs_operational": 3, 00:14:39.588 "base_bdevs_list": [ 00:14:39.588 { 00:14:39.588 "name": "NewBaseBdev", 00:14:39.588 "uuid": "806b34a4-6134-4ed6-aa92-1e7318986ad9", 00:14:39.588 "is_configured": true, 00:14:39.589 "data_offset": 0, 00:14:39.589 "data_size": 65536 00:14:39.589 }, 00:14:39.589 { 00:14:39.589 "name": "BaseBdev2", 00:14:39.589 "uuid": "fc344e4b-b9ba-45eb-b70e-07025778e310", 00:14:39.589 "is_configured": true, 00:14:39.589 "data_offset": 0, 00:14:39.589 "data_size": 65536 00:14:39.589 }, 00:14:39.589 { 00:14:39.589 "name": "BaseBdev3", 00:14:39.589 "uuid": "1da855a1-52e8-4d90-96e8-e5f650bed715", 00:14:39.589 "is_configured": true, 00:14:39.589 "data_offset": 0, 00:14:39.589 "data_size": 65536 00:14:39.589 } 00:14:39.589 ] 00:14:39.589 } 00:14:39.589 } 00:14:39.589 }' 00:14:39.589 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:39.848 BaseBdev2 00:14:39.848 BaseBdev3' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.848 [2024-12-12 19:43:22.663128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.848 [2024-12-12 19:43:22.663153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.848 [2024-12-12 19:43:22.663211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.848 [2024-12-12 19:43:22.663465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.848 [2024-12-12 19:43:22.663477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81568 00:14:39.848 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81568 ']' 00:14:39.849 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
81568 00:14:39.849 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:39.849 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.849 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81568 00:14:40.108 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:40.108 killing process with pid 81568 00:14:40.108 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:40.108 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81568' 00:14:40.108 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 81568 00:14:40.108 [2024-12-12 19:43:22.703428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.108 19:43:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 81568 00:14:40.368 [2024-12-12 19:43:22.987187] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:41.347 00:14:41.347 real 0m10.358s 00:14:41.347 user 0m16.470s 00:14:41.347 sys 0m1.911s 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.347 ************************************ 00:14:41.347 END TEST raid5f_state_function_test 00:14:41.347 ************************************ 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.347 19:43:24 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:41.347 19:43:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:14:41.347 19:43:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.347 19:43:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:41.347 ************************************ 00:14:41.347 START TEST raid5f_state_function_test_sb 00:14:41.347 ************************************ 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.347 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82189 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82189' 00:14:41.348 Process raid pid: 82189 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82189 00:14:41.348 19:43:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82189 ']' 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.348 19:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.619 [2024-12-12 19:43:24.217909] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:41.619 [2024-12-12 19:43:24.218072] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.619 [2024-12-12 19:43:24.385136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.878 [2024-12-12 19:43:24.493968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.878 [2024-12-12 19:43:24.686333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.878 [2024-12-12 19:43:24.686372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:42.454 19:43:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.454 [2024-12-12 19:43:25.053524] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.454 [2024-12-12 19:43:25.053585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.454 [2024-12-12 19:43:25.053595] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.454 [2024-12-12 19:43:25.053605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.454 [2024-12-12 19:43:25.053611] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.454 [2024-12-12 19:43:25.053619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.454 "name": "Existed_Raid", 00:14:42.454 "uuid": "d56c2761-3b79-4095-82e5-311b255388ae", 00:14:42.454 "strip_size_kb": 64, 00:14:42.454 "state": "configuring", 00:14:42.454 "raid_level": "raid5f", 00:14:42.454 "superblock": true, 00:14:42.454 "num_base_bdevs": 3, 00:14:42.454 "num_base_bdevs_discovered": 0, 00:14:42.454 "num_base_bdevs_operational": 3, 00:14:42.454 "base_bdevs_list": [ 00:14:42.454 { 00:14:42.454 "name": "BaseBdev1", 00:14:42.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.454 "is_configured": false, 00:14:42.454 "data_offset": 0, 00:14:42.454 "data_size": 0 00:14:42.454 }, 00:14:42.454 { 00:14:42.454 "name": "BaseBdev2", 00:14:42.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.454 "is_configured": false, 00:14:42.454 
"data_offset": 0, 00:14:42.454 "data_size": 0 00:14:42.454 }, 00:14:42.454 { 00:14:42.454 "name": "BaseBdev3", 00:14:42.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.454 "is_configured": false, 00:14:42.454 "data_offset": 0, 00:14:42.454 "data_size": 0 00:14:42.454 } 00:14:42.454 ] 00:14:42.454 }' 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.454 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.714 [2024-12-12 19:43:25.508689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.714 [2024-12-12 19:43:25.508776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.714 [2024-12-12 19:43:25.520692] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.714 [2024-12-12 19:43:25.520775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.714 [2024-12-12 19:43:25.520802] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.714 [2024-12-12 19:43:25.520822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.714 [2024-12-12 19:43:25.520838] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.714 [2024-12-12 19:43:25.520856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.714 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.974 [2024-12-12 19:43:25.566442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.974 BaseBdev1 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.974 [ 00:14:42.974 { 00:14:42.974 "name": "BaseBdev1", 00:14:42.974 "aliases": [ 00:14:42.974 "eddb1e6c-9b88-4cc9-b61f-1bc29d2dbbf0" 00:14:42.974 ], 00:14:42.974 "product_name": "Malloc disk", 00:14:42.974 "block_size": 512, 00:14:42.974 "num_blocks": 65536, 00:14:42.974 "uuid": "eddb1e6c-9b88-4cc9-b61f-1bc29d2dbbf0", 00:14:42.974 "assigned_rate_limits": { 00:14:42.974 "rw_ios_per_sec": 0, 00:14:42.974 "rw_mbytes_per_sec": 0, 00:14:42.974 "r_mbytes_per_sec": 0, 00:14:42.974 "w_mbytes_per_sec": 0 00:14:42.974 }, 00:14:42.974 "claimed": true, 00:14:42.974 "claim_type": "exclusive_write", 00:14:42.974 "zoned": false, 00:14:42.974 "supported_io_types": { 00:14:42.974 "read": true, 00:14:42.974 "write": true, 00:14:42.974 "unmap": true, 00:14:42.974 "flush": true, 00:14:42.974 "reset": true, 00:14:42.974 "nvme_admin": false, 00:14:42.974 "nvme_io": false, 00:14:42.974 "nvme_io_md": false, 00:14:42.974 "write_zeroes": true, 00:14:42.974 "zcopy": true, 00:14:42.974 "get_zone_info": false, 00:14:42.974 "zone_management": false, 00:14:42.974 "zone_append": false, 00:14:42.974 "compare": false, 00:14:42.974 "compare_and_write": false, 00:14:42.974 "abort": true, 00:14:42.974 "seek_hole": false, 00:14:42.974 
"seek_data": false, 00:14:42.974 "copy": true, 00:14:42.974 "nvme_iov_md": false 00:14:42.974 }, 00:14:42.974 "memory_domains": [ 00:14:42.974 { 00:14:42.974 "dma_device_id": "system", 00:14:42.974 "dma_device_type": 1 00:14:42.974 }, 00:14:42.974 { 00:14:42.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.974 "dma_device_type": 2 00:14:42.974 } 00:14:42.974 ], 00:14:42.974 "driver_specific": {} 00:14:42.974 } 00:14:42.974 ] 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.974 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.975 "name": "Existed_Raid", 00:14:42.975 "uuid": "42d4d470-0922-4388-ba05-b2853539b106", 00:14:42.975 "strip_size_kb": 64, 00:14:42.975 "state": "configuring", 00:14:42.975 "raid_level": "raid5f", 00:14:42.975 "superblock": true, 00:14:42.975 "num_base_bdevs": 3, 00:14:42.975 "num_base_bdevs_discovered": 1, 00:14:42.975 "num_base_bdevs_operational": 3, 00:14:42.975 "base_bdevs_list": [ 00:14:42.975 { 00:14:42.975 "name": "BaseBdev1", 00:14:42.975 "uuid": "eddb1e6c-9b88-4cc9-b61f-1bc29d2dbbf0", 00:14:42.975 "is_configured": true, 00:14:42.975 "data_offset": 2048, 00:14:42.975 "data_size": 63488 00:14:42.975 }, 00:14:42.975 { 00:14:42.975 "name": "BaseBdev2", 00:14:42.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.975 "is_configured": false, 00:14:42.975 "data_offset": 0, 00:14:42.975 "data_size": 0 00:14:42.975 }, 00:14:42.975 { 00:14:42.975 "name": "BaseBdev3", 00:14:42.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.975 "is_configured": false, 00:14:42.975 "data_offset": 0, 00:14:42.975 "data_size": 0 00:14:42.975 } 00:14:42.975 ] 00:14:42.975 }' 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.975 19:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.234 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:14:43.234 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.234 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.234 [2024-12-12 19:43:26.065663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.234 [2024-12-12 19:43:26.065752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:43.234 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.234 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:43.234 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.234 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.234 [2024-12-12 19:43:26.077703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.494 [2024-12-12 19:43:26.079514] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.494 [2024-12-12 19:43:26.079604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.494 [2024-12-12 19:43:26.079633] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.494 [2024-12-12 19:43:26.079656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.494 "name": 
"Existed_Raid", 00:14:43.494 "uuid": "7b7a68a0-af33-468d-85a1-0f81a5977caa", 00:14:43.494 "strip_size_kb": 64, 00:14:43.494 "state": "configuring", 00:14:43.494 "raid_level": "raid5f", 00:14:43.494 "superblock": true, 00:14:43.494 "num_base_bdevs": 3, 00:14:43.494 "num_base_bdevs_discovered": 1, 00:14:43.494 "num_base_bdevs_operational": 3, 00:14:43.494 "base_bdevs_list": [ 00:14:43.494 { 00:14:43.494 "name": "BaseBdev1", 00:14:43.494 "uuid": "eddb1e6c-9b88-4cc9-b61f-1bc29d2dbbf0", 00:14:43.494 "is_configured": true, 00:14:43.494 "data_offset": 2048, 00:14:43.494 "data_size": 63488 00:14:43.494 }, 00:14:43.494 { 00:14:43.494 "name": "BaseBdev2", 00:14:43.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.494 "is_configured": false, 00:14:43.494 "data_offset": 0, 00:14:43.494 "data_size": 0 00:14:43.494 }, 00:14:43.494 { 00:14:43.494 "name": "BaseBdev3", 00:14:43.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.494 "is_configured": false, 00:14:43.494 "data_offset": 0, 00:14:43.494 "data_size": 0 00:14:43.494 } 00:14:43.494 ] 00:14:43.494 }' 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.494 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.753 [2024-12-12 19:43:26.577608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.753 BaseBdev2 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.753 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.012 [ 00:14:44.012 { 00:14:44.012 "name": "BaseBdev2", 00:14:44.012 "aliases": [ 00:14:44.012 "d8a83083-dd3d-4ae3-9643-8b504c2af98e" 00:14:44.012 ], 00:14:44.012 "product_name": "Malloc disk", 00:14:44.012 "block_size": 512, 00:14:44.012 "num_blocks": 65536, 00:14:44.012 "uuid": "d8a83083-dd3d-4ae3-9643-8b504c2af98e", 00:14:44.012 "assigned_rate_limits": { 00:14:44.012 "rw_ios_per_sec": 0, 00:14:44.012 "rw_mbytes_per_sec": 0, 00:14:44.012 "r_mbytes_per_sec": 0, 00:14:44.012 "w_mbytes_per_sec": 0 00:14:44.012 }, 00:14:44.012 "claimed": true, 
00:14:44.012 "claim_type": "exclusive_write", 00:14:44.012 "zoned": false, 00:14:44.012 "supported_io_types": { 00:14:44.012 "read": true, 00:14:44.012 "write": true, 00:14:44.012 "unmap": true, 00:14:44.012 "flush": true, 00:14:44.012 "reset": true, 00:14:44.012 "nvme_admin": false, 00:14:44.012 "nvme_io": false, 00:14:44.012 "nvme_io_md": false, 00:14:44.012 "write_zeroes": true, 00:14:44.012 "zcopy": true, 00:14:44.012 "get_zone_info": false, 00:14:44.012 "zone_management": false, 00:14:44.012 "zone_append": false, 00:14:44.012 "compare": false, 00:14:44.012 "compare_and_write": false, 00:14:44.012 "abort": true, 00:14:44.012 "seek_hole": false, 00:14:44.012 "seek_data": false, 00:14:44.012 "copy": true, 00:14:44.012 "nvme_iov_md": false 00:14:44.012 }, 00:14:44.012 "memory_domains": [ 00:14:44.012 { 00:14:44.012 "dma_device_id": "system", 00:14:44.012 "dma_device_type": 1 00:14:44.012 }, 00:14:44.012 { 00:14:44.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.012 "dma_device_type": 2 00:14:44.012 } 00:14:44.012 ], 00:14:44.012 "driver_specific": {} 00:14:44.012 } 00:14:44.012 ] 00:14:44.012 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.012 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:44.012 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:44.012 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:44.012 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.012 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.012 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.012 19:43:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.012 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.013 "name": "Existed_Raid", 00:14:44.013 "uuid": "7b7a68a0-af33-468d-85a1-0f81a5977caa", 00:14:44.013 "strip_size_kb": 64, 00:14:44.013 "state": "configuring", 00:14:44.013 "raid_level": "raid5f", 00:14:44.013 "superblock": true, 00:14:44.013 "num_base_bdevs": 3, 00:14:44.013 "num_base_bdevs_discovered": 2, 00:14:44.013 "num_base_bdevs_operational": 3, 00:14:44.013 "base_bdevs_list": [ 00:14:44.013 { 00:14:44.013 "name": "BaseBdev1", 00:14:44.013 "uuid": "eddb1e6c-9b88-4cc9-b61f-1bc29d2dbbf0", 
00:14:44.013 "is_configured": true, 00:14:44.013 "data_offset": 2048, 00:14:44.013 "data_size": 63488 00:14:44.013 }, 00:14:44.013 { 00:14:44.013 "name": "BaseBdev2", 00:14:44.013 "uuid": "d8a83083-dd3d-4ae3-9643-8b504c2af98e", 00:14:44.013 "is_configured": true, 00:14:44.013 "data_offset": 2048, 00:14:44.013 "data_size": 63488 00:14:44.013 }, 00:14:44.013 { 00:14:44.013 "name": "BaseBdev3", 00:14:44.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.013 "is_configured": false, 00:14:44.013 "data_offset": 0, 00:14:44.013 "data_size": 0 00:14:44.013 } 00:14:44.013 ] 00:14:44.013 }' 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.013 19:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.272 [2024-12-12 19:43:27.098133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.272 [2024-12-12 19:43:27.098492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:44.272 [2024-12-12 19:43:27.098564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:44.272 [2024-12-12 19:43:27.098863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:44.272 BaseBdev3 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.272 [2024-12-12 19:43:27.104062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:44.272 [2024-12-12 19:43:27.104133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:44.272 [2024-12-12 19:43:27.104347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.272 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.531 [ 00:14:44.531 { 00:14:44.531 "name": "BaseBdev3", 00:14:44.531 "aliases": [ 00:14:44.531 "36249937-4a01-4012-a205-cb42c57fc3f3" 00:14:44.531 ], 00:14:44.531 "product_name": "Malloc disk", 00:14:44.531 "block_size": 512, 00:14:44.531 
"num_blocks": 65536, 00:14:44.531 "uuid": "36249937-4a01-4012-a205-cb42c57fc3f3", 00:14:44.531 "assigned_rate_limits": { 00:14:44.531 "rw_ios_per_sec": 0, 00:14:44.531 "rw_mbytes_per_sec": 0, 00:14:44.532 "r_mbytes_per_sec": 0, 00:14:44.532 "w_mbytes_per_sec": 0 00:14:44.532 }, 00:14:44.532 "claimed": true, 00:14:44.532 "claim_type": "exclusive_write", 00:14:44.532 "zoned": false, 00:14:44.532 "supported_io_types": { 00:14:44.532 "read": true, 00:14:44.532 "write": true, 00:14:44.532 "unmap": true, 00:14:44.532 "flush": true, 00:14:44.532 "reset": true, 00:14:44.532 "nvme_admin": false, 00:14:44.532 "nvme_io": false, 00:14:44.532 "nvme_io_md": false, 00:14:44.532 "write_zeroes": true, 00:14:44.532 "zcopy": true, 00:14:44.532 "get_zone_info": false, 00:14:44.532 "zone_management": false, 00:14:44.532 "zone_append": false, 00:14:44.532 "compare": false, 00:14:44.532 "compare_and_write": false, 00:14:44.532 "abort": true, 00:14:44.532 "seek_hole": false, 00:14:44.532 "seek_data": false, 00:14:44.532 "copy": true, 00:14:44.532 "nvme_iov_md": false 00:14:44.532 }, 00:14:44.532 "memory_domains": [ 00:14:44.532 { 00:14:44.532 "dma_device_id": "system", 00:14:44.532 "dma_device_type": 1 00:14:44.532 }, 00:14:44.532 { 00:14:44.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.532 "dma_device_type": 2 00:14:44.532 } 00:14:44.532 ], 00:14:44.532 "driver_specific": {} 00:14:44.532 } 00:14:44.532 ] 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.532 "name": "Existed_Raid", 00:14:44.532 "uuid": "7b7a68a0-af33-468d-85a1-0f81a5977caa", 00:14:44.532 "strip_size_kb": 64, 00:14:44.532 "state": "online", 00:14:44.532 "raid_level": "raid5f", 00:14:44.532 "superblock": true, 
00:14:44.532 "num_base_bdevs": 3, 00:14:44.532 "num_base_bdevs_discovered": 3, 00:14:44.532 "num_base_bdevs_operational": 3, 00:14:44.532 "base_bdevs_list": [ 00:14:44.532 { 00:14:44.532 "name": "BaseBdev1", 00:14:44.532 "uuid": "eddb1e6c-9b88-4cc9-b61f-1bc29d2dbbf0", 00:14:44.532 "is_configured": true, 00:14:44.532 "data_offset": 2048, 00:14:44.532 "data_size": 63488 00:14:44.532 }, 00:14:44.532 { 00:14:44.532 "name": "BaseBdev2", 00:14:44.532 "uuid": "d8a83083-dd3d-4ae3-9643-8b504c2af98e", 00:14:44.532 "is_configured": true, 00:14:44.532 "data_offset": 2048, 00:14:44.532 "data_size": 63488 00:14:44.532 }, 00:14:44.532 { 00:14:44.532 "name": "BaseBdev3", 00:14:44.532 "uuid": "36249937-4a01-4012-a205-cb42c57fc3f3", 00:14:44.532 "is_configured": true, 00:14:44.532 "data_offset": 2048, 00:14:44.532 "data_size": 63488 00:14:44.532 } 00:14:44.532 ] 00:14:44.532 }' 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.532 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.791 [2024-12-12 19:43:27.553463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:44.791 "name": "Existed_Raid", 00:14:44.791 "aliases": [ 00:14:44.791 "7b7a68a0-af33-468d-85a1-0f81a5977caa" 00:14:44.791 ], 00:14:44.791 "product_name": "Raid Volume", 00:14:44.791 "block_size": 512, 00:14:44.791 "num_blocks": 126976, 00:14:44.791 "uuid": "7b7a68a0-af33-468d-85a1-0f81a5977caa", 00:14:44.791 "assigned_rate_limits": { 00:14:44.791 "rw_ios_per_sec": 0, 00:14:44.791 "rw_mbytes_per_sec": 0, 00:14:44.791 "r_mbytes_per_sec": 0, 00:14:44.791 "w_mbytes_per_sec": 0 00:14:44.791 }, 00:14:44.791 "claimed": false, 00:14:44.791 "zoned": false, 00:14:44.791 "supported_io_types": { 00:14:44.791 "read": true, 00:14:44.791 "write": true, 00:14:44.791 "unmap": false, 00:14:44.791 "flush": false, 00:14:44.791 "reset": true, 00:14:44.791 "nvme_admin": false, 00:14:44.791 "nvme_io": false, 00:14:44.791 "nvme_io_md": false, 00:14:44.791 "write_zeroes": true, 00:14:44.791 "zcopy": false, 00:14:44.791 "get_zone_info": false, 00:14:44.791 "zone_management": false, 00:14:44.791 "zone_append": false, 00:14:44.791 "compare": false, 00:14:44.791 "compare_and_write": false, 00:14:44.791 "abort": false, 00:14:44.791 "seek_hole": false, 00:14:44.791 "seek_data": false, 00:14:44.791 "copy": false, 00:14:44.791 "nvme_iov_md": false 00:14:44.791 }, 00:14:44.791 "driver_specific": { 00:14:44.791 "raid": { 00:14:44.791 "uuid": "7b7a68a0-af33-468d-85a1-0f81a5977caa", 00:14:44.791 
"strip_size_kb": 64, 00:14:44.791 "state": "online", 00:14:44.791 "raid_level": "raid5f", 00:14:44.791 "superblock": true, 00:14:44.791 "num_base_bdevs": 3, 00:14:44.791 "num_base_bdevs_discovered": 3, 00:14:44.791 "num_base_bdevs_operational": 3, 00:14:44.791 "base_bdevs_list": [ 00:14:44.791 { 00:14:44.791 "name": "BaseBdev1", 00:14:44.791 "uuid": "eddb1e6c-9b88-4cc9-b61f-1bc29d2dbbf0", 00:14:44.791 "is_configured": true, 00:14:44.791 "data_offset": 2048, 00:14:44.791 "data_size": 63488 00:14:44.791 }, 00:14:44.791 { 00:14:44.791 "name": "BaseBdev2", 00:14:44.791 "uuid": "d8a83083-dd3d-4ae3-9643-8b504c2af98e", 00:14:44.791 "is_configured": true, 00:14:44.791 "data_offset": 2048, 00:14:44.791 "data_size": 63488 00:14:44.791 }, 00:14:44.791 { 00:14:44.791 "name": "BaseBdev3", 00:14:44.791 "uuid": "36249937-4a01-4012-a205-cb42c57fc3f3", 00:14:44.791 "is_configured": true, 00:14:44.791 "data_offset": 2048, 00:14:44.791 "data_size": 63488 00:14:44.791 } 00:14:44.791 ] 00:14:44.791 } 00:14:44.791 } 00:14:44.791 }' 00:14:44.791 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:45.051 BaseBdev2 00:14:45.051 BaseBdev3' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.051 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.051 [2024-12-12 19:43:27.812846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.310 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.311 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.311 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.311 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.311 "name": "Existed_Raid", 00:14:45.311 "uuid": "7b7a68a0-af33-468d-85a1-0f81a5977caa", 00:14:45.311 "strip_size_kb": 64, 00:14:45.311 "state": "online", 00:14:45.311 "raid_level": "raid5f", 00:14:45.311 "superblock": true, 00:14:45.311 "num_base_bdevs": 3, 00:14:45.311 "num_base_bdevs_discovered": 2, 00:14:45.311 "num_base_bdevs_operational": 2, 
00:14:45.311 "base_bdevs_list": [ 00:14:45.311 { 00:14:45.311 "name": null, 00:14:45.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.311 "is_configured": false, 00:14:45.311 "data_offset": 0, 00:14:45.311 "data_size": 63488 00:14:45.311 }, 00:14:45.311 { 00:14:45.311 "name": "BaseBdev2", 00:14:45.311 "uuid": "d8a83083-dd3d-4ae3-9643-8b504c2af98e", 00:14:45.311 "is_configured": true, 00:14:45.311 "data_offset": 2048, 00:14:45.311 "data_size": 63488 00:14:45.311 }, 00:14:45.311 { 00:14:45.311 "name": "BaseBdev3", 00:14:45.311 "uuid": "36249937-4a01-4012-a205-cb42c57fc3f3", 00:14:45.311 "is_configured": true, 00:14:45.311 "data_offset": 2048, 00:14:45.311 "data_size": 63488 00:14:45.311 } 00:14:45.311 ] 00:14:45.311 }' 00:14:45.311 19:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.311 19:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.571 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:45.571 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.571 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:45.571 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.571 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.571 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.571 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.831 [2024-12-12 19:43:28.425765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.831 [2024-12-12 19:43:28.425912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.831 [2024-12-12 19:43:28.518138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:45.831 
19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.831 [2024-12-12 19:43:28.574076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:45.831 [2024-12-12 19:43:28.574166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.831 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.091 BaseBdev2 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.091 [ 00:14:46.091 { 
00:14:46.091 "name": "BaseBdev2", 00:14:46.091 "aliases": [ 00:14:46.091 "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6" 00:14:46.091 ], 00:14:46.091 "product_name": "Malloc disk", 00:14:46.091 "block_size": 512, 00:14:46.091 "num_blocks": 65536, 00:14:46.091 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:46.091 "assigned_rate_limits": { 00:14:46.091 "rw_ios_per_sec": 0, 00:14:46.091 "rw_mbytes_per_sec": 0, 00:14:46.091 "r_mbytes_per_sec": 0, 00:14:46.091 "w_mbytes_per_sec": 0 00:14:46.091 }, 00:14:46.091 "claimed": false, 00:14:46.091 "zoned": false, 00:14:46.091 "supported_io_types": { 00:14:46.091 "read": true, 00:14:46.091 "write": true, 00:14:46.091 "unmap": true, 00:14:46.091 "flush": true, 00:14:46.091 "reset": true, 00:14:46.091 "nvme_admin": false, 00:14:46.091 "nvme_io": false, 00:14:46.091 "nvme_io_md": false, 00:14:46.091 "write_zeroes": true, 00:14:46.091 "zcopy": true, 00:14:46.091 "get_zone_info": false, 00:14:46.091 "zone_management": false, 00:14:46.091 "zone_append": false, 00:14:46.091 "compare": false, 00:14:46.091 "compare_and_write": false, 00:14:46.091 "abort": true, 00:14:46.091 "seek_hole": false, 00:14:46.091 "seek_data": false, 00:14:46.091 "copy": true, 00:14:46.091 "nvme_iov_md": false 00:14:46.091 }, 00:14:46.091 "memory_domains": [ 00:14:46.091 { 00:14:46.091 "dma_device_id": "system", 00:14:46.091 "dma_device_type": 1 00:14:46.091 }, 00:14:46.091 { 00:14:46.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.091 "dma_device_type": 2 00:14:46.091 } 00:14:46.091 ], 00:14:46.091 "driver_specific": {} 00:14:46.091 } 00:14:46.091 ] 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:46.091 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.092 BaseBdev3 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.092 19:43:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.092 [ 00:14:46.092 { 00:14:46.092 "name": "BaseBdev3", 00:14:46.092 "aliases": [ 00:14:46.092 "d2352f50-beb8-462e-a431-63ddfe8a5540" 00:14:46.092 ], 00:14:46.092 "product_name": "Malloc disk", 00:14:46.092 "block_size": 512, 00:14:46.092 "num_blocks": 65536, 00:14:46.092 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:46.092 "assigned_rate_limits": { 00:14:46.092 "rw_ios_per_sec": 0, 00:14:46.092 "rw_mbytes_per_sec": 0, 00:14:46.092 "r_mbytes_per_sec": 0, 00:14:46.092 "w_mbytes_per_sec": 0 00:14:46.092 }, 00:14:46.092 "claimed": false, 00:14:46.092 "zoned": false, 00:14:46.092 "supported_io_types": { 00:14:46.092 "read": true, 00:14:46.092 "write": true, 00:14:46.092 "unmap": true, 00:14:46.092 "flush": true, 00:14:46.092 "reset": true, 00:14:46.092 "nvme_admin": false, 00:14:46.092 "nvme_io": false, 00:14:46.092 "nvme_io_md": false, 00:14:46.092 "write_zeroes": true, 00:14:46.092 "zcopy": true, 00:14:46.092 "get_zone_info": false, 00:14:46.092 "zone_management": false, 00:14:46.092 "zone_append": false, 00:14:46.092 "compare": false, 00:14:46.092 "compare_and_write": false, 00:14:46.092 "abort": true, 00:14:46.092 "seek_hole": false, 00:14:46.092 "seek_data": false, 00:14:46.092 "copy": true, 00:14:46.092 "nvme_iov_md": false 00:14:46.092 }, 00:14:46.092 "memory_domains": [ 00:14:46.092 { 00:14:46.092 "dma_device_id": "system", 00:14:46.092 "dma_device_type": 1 00:14:46.092 }, 00:14:46.092 { 00:14:46.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.092 "dma_device_type": 2 00:14:46.092 } 00:14:46.092 ], 00:14:46.092 "driver_specific": {} 00:14:46.092 } 00:14:46.092 ] 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.092 [2024-12-12 19:43:28.885098] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.092 [2024-12-12 19:43:28.885216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.092 [2024-12-12 19:43:28.885254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.092 [2024-12-12 19:43:28.886935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.092 19:43:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.092 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.351 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.351 "name": "Existed_Raid", 00:14:46.351 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:46.351 "strip_size_kb": 64, 00:14:46.351 "state": "configuring", 00:14:46.351 "raid_level": "raid5f", 00:14:46.351 "superblock": true, 00:14:46.351 "num_base_bdevs": 3, 00:14:46.351 "num_base_bdevs_discovered": 2, 00:14:46.351 "num_base_bdevs_operational": 3, 00:14:46.351 "base_bdevs_list": [ 00:14:46.351 { 00:14:46.351 "name": "BaseBdev1", 00:14:46.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.351 "is_configured": false, 00:14:46.351 "data_offset": 0, 00:14:46.351 "data_size": 0 00:14:46.351 }, 00:14:46.351 { 00:14:46.351 "name": "BaseBdev2", 00:14:46.351 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:46.351 "is_configured": true, 00:14:46.351 "data_offset": 2048, 00:14:46.351 "data_size": 63488 00:14:46.351 }, 00:14:46.351 { 
00:14:46.351 "name": "BaseBdev3", 00:14:46.351 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:46.351 "is_configured": true, 00:14:46.351 "data_offset": 2048, 00:14:46.351 "data_size": 63488 00:14:46.351 } 00:14:46.351 ] 00:14:46.351 }' 00:14:46.351 19:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.351 19:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.611 [2024-12-12 19:43:29.300392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.611 "name": "Existed_Raid", 00:14:46.611 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:46.611 "strip_size_kb": 64, 00:14:46.611 "state": "configuring", 00:14:46.611 "raid_level": "raid5f", 00:14:46.611 "superblock": true, 00:14:46.611 "num_base_bdevs": 3, 00:14:46.611 "num_base_bdevs_discovered": 1, 00:14:46.611 "num_base_bdevs_operational": 3, 00:14:46.611 "base_bdevs_list": [ 00:14:46.611 { 00:14:46.611 "name": "BaseBdev1", 00:14:46.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.611 "is_configured": false, 00:14:46.611 "data_offset": 0, 00:14:46.611 "data_size": 0 00:14:46.611 }, 00:14:46.611 { 00:14:46.611 "name": null, 00:14:46.611 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:46.611 "is_configured": false, 00:14:46.611 "data_offset": 0, 00:14:46.611 "data_size": 63488 00:14:46.611 }, 00:14:46.611 { 00:14:46.611 "name": "BaseBdev3", 00:14:46.611 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:46.611 "is_configured": true, 00:14:46.611 "data_offset": 2048, 00:14:46.611 "data_size": 
63488 00:14:46.611 } 00:14:46.611 ] 00:14:46.611 }' 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.611 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.181 [2024-12-12 19:43:29.858305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.181 BaseBdev1 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.181 19:43:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.181 [ 00:14:47.181 { 00:14:47.181 "name": "BaseBdev1", 00:14:47.181 "aliases": [ 00:14:47.181 "e0be0f56-daa7-497e-a3ad-a29633bcfdf1" 00:14:47.181 ], 00:14:47.181 "product_name": "Malloc disk", 00:14:47.181 "block_size": 512, 00:14:47.181 "num_blocks": 65536, 00:14:47.181 "uuid": "e0be0f56-daa7-497e-a3ad-a29633bcfdf1", 00:14:47.181 "assigned_rate_limits": { 00:14:47.181 "rw_ios_per_sec": 0, 00:14:47.181 "rw_mbytes_per_sec": 0, 00:14:47.181 "r_mbytes_per_sec": 0, 00:14:47.181 "w_mbytes_per_sec": 0 00:14:47.181 }, 00:14:47.181 "claimed": true, 00:14:47.181 "claim_type": "exclusive_write", 00:14:47.181 "zoned": false, 00:14:47.181 "supported_io_types": { 00:14:47.181 "read": true, 00:14:47.181 "write": true, 00:14:47.181 "unmap": true, 00:14:47.181 "flush": true, 00:14:47.181 "reset": true, 00:14:47.181 "nvme_admin": false, 00:14:47.181 
"nvme_io": false, 00:14:47.181 "nvme_io_md": false, 00:14:47.181 "write_zeroes": true, 00:14:47.181 "zcopy": true, 00:14:47.181 "get_zone_info": false, 00:14:47.181 "zone_management": false, 00:14:47.181 "zone_append": false, 00:14:47.181 "compare": false, 00:14:47.181 "compare_and_write": false, 00:14:47.181 "abort": true, 00:14:47.181 "seek_hole": false, 00:14:47.181 "seek_data": false, 00:14:47.181 "copy": true, 00:14:47.181 "nvme_iov_md": false 00:14:47.181 }, 00:14:47.181 "memory_domains": [ 00:14:47.181 { 00:14:47.181 "dma_device_id": "system", 00:14:47.181 "dma_device_type": 1 00:14:47.181 }, 00:14:47.181 { 00:14:47.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.181 "dma_device_type": 2 00:14:47.181 } 00:14:47.181 ], 00:14:47.181 "driver_specific": {} 00:14:47.181 } 00:14:47.181 ] 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.181 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.182 "name": "Existed_Raid", 00:14:47.182 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:47.182 "strip_size_kb": 64, 00:14:47.182 "state": "configuring", 00:14:47.182 "raid_level": "raid5f", 00:14:47.182 "superblock": true, 00:14:47.182 "num_base_bdevs": 3, 00:14:47.182 "num_base_bdevs_discovered": 2, 00:14:47.182 "num_base_bdevs_operational": 3, 00:14:47.182 "base_bdevs_list": [ 00:14:47.182 { 00:14:47.182 "name": "BaseBdev1", 00:14:47.182 "uuid": "e0be0f56-daa7-497e-a3ad-a29633bcfdf1", 00:14:47.182 "is_configured": true, 00:14:47.182 "data_offset": 2048, 00:14:47.182 "data_size": 63488 00:14:47.182 }, 00:14:47.182 { 00:14:47.182 "name": null, 00:14:47.182 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:47.182 "is_configured": false, 00:14:47.182 "data_offset": 0, 00:14:47.182 "data_size": 63488 00:14:47.182 }, 00:14:47.182 { 00:14:47.182 "name": "BaseBdev3", 00:14:47.182 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:47.182 "is_configured": true, 00:14:47.182 "data_offset": 2048, 00:14:47.182 "data_size": 
63488 00:14:47.182 } 00:14:47.182 ] 00:14:47.182 }' 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.182 19:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.751 [2024-12-12 19:43:30.385594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.751 19:43:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.751 "name": "Existed_Raid", 00:14:47.751 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:47.751 "strip_size_kb": 64, 00:14:47.751 "state": "configuring", 00:14:47.751 "raid_level": "raid5f", 00:14:47.751 "superblock": true, 00:14:47.751 "num_base_bdevs": 3, 00:14:47.751 "num_base_bdevs_discovered": 1, 00:14:47.751 "num_base_bdevs_operational": 3, 00:14:47.751 "base_bdevs_list": [ 00:14:47.751 { 00:14:47.751 "name": "BaseBdev1", 00:14:47.751 "uuid": "e0be0f56-daa7-497e-a3ad-a29633bcfdf1", 
00:14:47.751 "is_configured": true, 00:14:47.751 "data_offset": 2048, 00:14:47.751 "data_size": 63488 00:14:47.751 }, 00:14:47.751 { 00:14:47.751 "name": null, 00:14:47.751 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:47.751 "is_configured": false, 00:14:47.751 "data_offset": 0, 00:14:47.751 "data_size": 63488 00:14:47.751 }, 00:14:47.751 { 00:14:47.751 "name": null, 00:14:47.751 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:47.751 "is_configured": false, 00:14:47.751 "data_offset": 0, 00:14:47.751 "data_size": 63488 00:14:47.751 } 00:14:47.751 ] 00:14:47.751 }' 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.751 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.010 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.010 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.010 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:48.010 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.010 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.270 [2024-12-12 19:43:30.868824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.270 "name": "Existed_Raid", 00:14:48.270 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:48.270 "strip_size_kb": 64, 00:14:48.270 "state": "configuring", 00:14:48.270 "raid_level": "raid5f", 00:14:48.270 "superblock": true, 00:14:48.270 "num_base_bdevs": 3, 00:14:48.270 "num_base_bdevs_discovered": 2, 00:14:48.270 "num_base_bdevs_operational": 3, 00:14:48.270 "base_bdevs_list": [ 00:14:48.270 { 00:14:48.270 "name": "BaseBdev1", 00:14:48.270 "uuid": "e0be0f56-daa7-497e-a3ad-a29633bcfdf1", 00:14:48.270 "is_configured": true, 00:14:48.270 "data_offset": 2048, 00:14:48.270 "data_size": 63488 00:14:48.270 }, 00:14:48.270 { 00:14:48.270 "name": null, 00:14:48.270 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:48.270 "is_configured": false, 00:14:48.270 "data_offset": 0, 00:14:48.270 "data_size": 63488 00:14:48.270 }, 00:14:48.270 { 00:14:48.270 "name": "BaseBdev3", 00:14:48.270 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:48.270 "is_configured": true, 00:14:48.270 "data_offset": 2048, 00:14:48.270 "data_size": 63488 00:14:48.270 } 00:14:48.270 ] 00:14:48.270 }' 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.270 19:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.530 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:48.530 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.530 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.530 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.530 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.530 19:43:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:48.530 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:48.530 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.530 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.530 [2024-12-12 19:43:31.336226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.790 "name": "Existed_Raid", 00:14:48.790 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:48.790 "strip_size_kb": 64, 00:14:48.790 "state": "configuring", 00:14:48.790 "raid_level": "raid5f", 00:14:48.790 "superblock": true, 00:14:48.790 "num_base_bdevs": 3, 00:14:48.790 "num_base_bdevs_discovered": 1, 00:14:48.790 "num_base_bdevs_operational": 3, 00:14:48.790 "base_bdevs_list": [ 00:14:48.790 { 00:14:48.790 "name": null, 00:14:48.790 "uuid": "e0be0f56-daa7-497e-a3ad-a29633bcfdf1", 00:14:48.790 "is_configured": false, 00:14:48.790 "data_offset": 0, 00:14:48.790 "data_size": 63488 00:14:48.790 }, 00:14:48.790 { 00:14:48.790 "name": null, 00:14:48.790 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:48.790 "is_configured": false, 00:14:48.790 "data_offset": 0, 00:14:48.790 "data_size": 63488 00:14:48.790 }, 00:14:48.790 { 00:14:48.790 "name": "BaseBdev3", 00:14:48.790 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:48.790 "is_configured": true, 00:14:48.790 "data_offset": 2048, 00:14:48.790 "data_size": 63488 00:14:48.790 } 00:14:48.790 ] 00:14:48.790 }' 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.790 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.049 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:14:49.049 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.049 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.049 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.049 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.308 [2024-12-12 19:43:31.912469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.308 
19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.308 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.309 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.309 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.309 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.309 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.309 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.309 "name": "Existed_Raid", 00:14:49.309 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:49.309 "strip_size_kb": 64, 00:14:49.309 "state": "configuring", 00:14:49.309 "raid_level": "raid5f", 00:14:49.309 "superblock": true, 00:14:49.309 "num_base_bdevs": 3, 00:14:49.309 "num_base_bdevs_discovered": 2, 00:14:49.309 "num_base_bdevs_operational": 3, 00:14:49.309 "base_bdevs_list": [ 00:14:49.309 { 00:14:49.309 "name": null, 00:14:49.309 "uuid": "e0be0f56-daa7-497e-a3ad-a29633bcfdf1", 00:14:49.309 "is_configured": false, 00:14:49.309 "data_offset": 0, 00:14:49.309 "data_size": 63488 00:14:49.309 }, 00:14:49.309 { 00:14:49.309 "name": "BaseBdev2", 00:14:49.309 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:49.309 "is_configured": true, 00:14:49.309 "data_offset": 2048, 00:14:49.309 "data_size": 63488 00:14:49.309 }, 
00:14:49.309 { 00:14:49.309 "name": "BaseBdev3", 00:14:49.309 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:49.309 "is_configured": true, 00:14:49.309 "data_offset": 2048, 00:14:49.309 "data_size": 63488 00:14:49.309 } 00:14:49.309 ] 00:14:49.309 }' 00:14:49.309 19:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.309 19:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.569 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.569 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.569 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.569 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.569 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e0be0f56-daa7-497e-a3ad-a29633bcfdf1 00:14:49.828 19:43:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.828 [2024-12-12 19:43:32.526973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:49.828 [2024-12-12 19:43:32.527380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:49.828 [2024-12-12 19:43:32.527461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:49.828 [2024-12-12 19:43:32.527830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:49.828 NewBaseBdev 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.828 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.829 [2024-12-12 19:43:32.533403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:14:49.829 [2024-12-12 19:43:32.533429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:49.829 [2024-12-12 19:43:32.533631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.829 [ 00:14:49.829 { 00:14:49.829 "name": "NewBaseBdev", 00:14:49.829 "aliases": [ 00:14:49.829 "e0be0f56-daa7-497e-a3ad-a29633bcfdf1" 00:14:49.829 ], 00:14:49.829 "product_name": "Malloc disk", 00:14:49.829 "block_size": 512, 00:14:49.829 "num_blocks": 65536, 00:14:49.829 "uuid": "e0be0f56-daa7-497e-a3ad-a29633bcfdf1", 00:14:49.829 "assigned_rate_limits": { 00:14:49.829 "rw_ios_per_sec": 0, 00:14:49.829 "rw_mbytes_per_sec": 0, 00:14:49.829 "r_mbytes_per_sec": 0, 00:14:49.829 "w_mbytes_per_sec": 0 00:14:49.829 }, 00:14:49.829 "claimed": true, 00:14:49.829 "claim_type": "exclusive_write", 00:14:49.829 "zoned": false, 00:14:49.829 "supported_io_types": { 00:14:49.829 "read": true, 00:14:49.829 "write": true, 00:14:49.829 "unmap": true, 00:14:49.829 "flush": true, 00:14:49.829 "reset": true, 00:14:49.829 "nvme_admin": false, 00:14:49.829 "nvme_io": false, 00:14:49.829 "nvme_io_md": false, 00:14:49.829 "write_zeroes": true, 00:14:49.829 "zcopy": true, 00:14:49.829 "get_zone_info": false, 00:14:49.829 "zone_management": false, 00:14:49.829 "zone_append": false, 00:14:49.829 "compare": false, 00:14:49.829 "compare_and_write": false, 00:14:49.829 "abort": true, 00:14:49.829 "seek_hole": false, 
00:14:49.829 "seek_data": false, 00:14:49.829 "copy": true, 00:14:49.829 "nvme_iov_md": false 00:14:49.829 }, 00:14:49.829 "memory_domains": [ 00:14:49.829 { 00:14:49.829 "dma_device_id": "system", 00:14:49.829 "dma_device_type": 1 00:14:49.829 }, 00:14:49.829 { 00:14:49.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.829 "dma_device_type": 2 00:14:49.829 } 00:14:49.829 ], 00:14:49.829 "driver_specific": {} 00:14:49.829 } 00:14:49.829 ] 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.829 "name": "Existed_Raid", 00:14:49.829 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:49.829 "strip_size_kb": 64, 00:14:49.829 "state": "online", 00:14:49.829 "raid_level": "raid5f", 00:14:49.829 "superblock": true, 00:14:49.829 "num_base_bdevs": 3, 00:14:49.829 "num_base_bdevs_discovered": 3, 00:14:49.829 "num_base_bdevs_operational": 3, 00:14:49.829 "base_bdevs_list": [ 00:14:49.829 { 00:14:49.829 "name": "NewBaseBdev", 00:14:49.829 "uuid": "e0be0f56-daa7-497e-a3ad-a29633bcfdf1", 00:14:49.829 "is_configured": true, 00:14:49.829 "data_offset": 2048, 00:14:49.829 "data_size": 63488 00:14:49.829 }, 00:14:49.829 { 00:14:49.829 "name": "BaseBdev2", 00:14:49.829 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:49.829 "is_configured": true, 00:14:49.829 "data_offset": 2048, 00:14:49.829 "data_size": 63488 00:14:49.829 }, 00:14:49.829 { 00:14:49.829 "name": "BaseBdev3", 00:14:49.829 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:49.829 "is_configured": true, 00:14:49.829 "data_offset": 2048, 00:14:49.829 "data_size": 63488 00:14:49.829 } 00:14:49.829 ] 00:14:49.829 }' 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.829 19:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.398 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:50.398 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:50.398 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.398 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.398 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.398 19:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.398 [2024-12-12 19:43:33.012577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.398 "name": "Existed_Raid", 00:14:50.398 "aliases": [ 00:14:50.398 "6d1dbc6b-a439-4d67-a614-6468d1991e2a" 00:14:50.398 ], 00:14:50.398 "product_name": "Raid Volume", 00:14:50.398 "block_size": 512, 00:14:50.398 "num_blocks": 126976, 00:14:50.398 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:50.398 "assigned_rate_limits": { 00:14:50.398 "rw_ios_per_sec": 0, 00:14:50.398 "rw_mbytes_per_sec": 0, 00:14:50.398 "r_mbytes_per_sec": 0, 00:14:50.398 "w_mbytes_per_sec": 0 00:14:50.398 }, 00:14:50.398 "claimed": false, 00:14:50.398 "zoned": false, 00:14:50.398 
"supported_io_types": { 00:14:50.398 "read": true, 00:14:50.398 "write": true, 00:14:50.398 "unmap": false, 00:14:50.398 "flush": false, 00:14:50.398 "reset": true, 00:14:50.398 "nvme_admin": false, 00:14:50.398 "nvme_io": false, 00:14:50.398 "nvme_io_md": false, 00:14:50.398 "write_zeroes": true, 00:14:50.398 "zcopy": false, 00:14:50.398 "get_zone_info": false, 00:14:50.398 "zone_management": false, 00:14:50.398 "zone_append": false, 00:14:50.398 "compare": false, 00:14:50.398 "compare_and_write": false, 00:14:50.398 "abort": false, 00:14:50.398 "seek_hole": false, 00:14:50.398 "seek_data": false, 00:14:50.398 "copy": false, 00:14:50.398 "nvme_iov_md": false 00:14:50.398 }, 00:14:50.398 "driver_specific": { 00:14:50.398 "raid": { 00:14:50.398 "uuid": "6d1dbc6b-a439-4d67-a614-6468d1991e2a", 00:14:50.398 "strip_size_kb": 64, 00:14:50.398 "state": "online", 00:14:50.398 "raid_level": "raid5f", 00:14:50.398 "superblock": true, 00:14:50.398 "num_base_bdevs": 3, 00:14:50.398 "num_base_bdevs_discovered": 3, 00:14:50.398 "num_base_bdevs_operational": 3, 00:14:50.398 "base_bdevs_list": [ 00:14:50.398 { 00:14:50.398 "name": "NewBaseBdev", 00:14:50.398 "uuid": "e0be0f56-daa7-497e-a3ad-a29633bcfdf1", 00:14:50.398 "is_configured": true, 00:14:50.398 "data_offset": 2048, 00:14:50.398 "data_size": 63488 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "name": "BaseBdev2", 00:14:50.398 "uuid": "e6e5bdce-bab3-4946-ad59-9b8271d5c7b6", 00:14:50.398 "is_configured": true, 00:14:50.398 "data_offset": 2048, 00:14:50.398 "data_size": 63488 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "name": "BaseBdev3", 00:14:50.398 "uuid": "d2352f50-beb8-462e-a431-63ddfe8a5540", 00:14:50.398 "is_configured": true, 00:14:50.398 "data_offset": 2048, 00:14:50.398 "data_size": 63488 00:14:50.398 } 00:14:50.398 ] 00:14:50.398 } 00:14:50.398 } 00:14:50.398 }' 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:50.398 BaseBdev2 00:14:50.398 BaseBdev3' 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.398 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.399 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.658 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:50.658 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.658 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.658 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.659 [2024-12-12 19:43:33.295850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.659 [2024-12-12 19:43:33.295955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:14:50.659 [2024-12-12 19:43:33.296099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.659 [2024-12-12 19:43:33.296494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.659 [2024-12-12 19:43:33.296609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82189 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82189 ']' 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82189 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82189 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.659 killing process with pid 82189 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82189' 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82189 00:14:50.659 [2024-12-12 19:43:33.345246] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.659 19:43:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 82189 00:14:50.919 [2024-12-12 19:43:33.668357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.301 19:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:52.301 ************************************ 00:14:52.301 END TEST raid5f_state_function_test_sb 00:14:52.301 ************************************ 00:14:52.301 00:14:52.301 real 0m10.760s 00:14:52.301 user 0m16.963s 00:14:52.301 sys 0m1.972s 00:14:52.301 19:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.301 19:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.301 19:43:34 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:52.301 19:43:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:52.301 19:43:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.301 19:43:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:52.301 ************************************ 00:14:52.301 START TEST raid5f_superblock_test 00:14:52.301 ************************************ 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82810 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82810 00:14:52.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 82810 ']' 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.301 19:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.301 [2024-12-12 19:43:35.043398] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:52.301 [2024-12-12 19:43:35.043580] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82810 ] 00:14:52.560 [2024-12-12 19:43:35.193391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.560 [2024-12-12 19:43:35.330373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.820 [2024-12-12 19:43:35.564983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.820 [2024-12-12 19:43:35.565131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.079 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.339 malloc1 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.339 [2024-12-12 19:43:35.942648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:53.339 [2024-12-12 19:43:35.942769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.339 [2024-12-12 19:43:35.942819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:53.339 [2024-12-12 
19:43:35.942856] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.339 [2024-12-12 19:43:35.945387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.339 [2024-12-12 19:43:35.945485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:53.339 pt1 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.339 malloc2 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.339 19:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.339 [2024-12-12 19:43:36.004107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:53.339 [2024-12-12 19:43:36.004196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.339 [2024-12-12 19:43:36.004225] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:53.339 [2024-12-12 19:43:36.004236] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.339 [2024-12-12 19:43:36.006946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.339 [2024-12-12 19:43:36.006994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:53.339 pt2 00:14:53.339 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.339 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:53.339 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:53.339 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:53.339 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.340 19:43:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.340 malloc3 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.340 [2024-12-12 19:43:36.078937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:53.340 [2024-12-12 19:43:36.079070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.340 [2024-12-12 19:43:36.079117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:53.340 [2024-12-12 19:43:36.079158] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.340 [2024-12-12 19:43:36.081664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.340 [2024-12-12 19:43:36.081744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:53.340 pt3 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f 
-b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.340 [2024-12-12 19:43:36.090958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:53.340 [2024-12-12 19:43:36.093114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:53.340 [2024-12-12 19:43:36.093237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:53.340 [2024-12-12 19:43:36.093511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:53.340 [2024-12-12 19:43:36.093597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:53.340 [2024-12-12 19:43:36.093895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:53.340 [2024-12-12 19:43:36.099944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:53.340 [2024-12-12 19:43:36.100004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:53.340 [2024-12-12 19:43:36.100297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.340 19:43:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.340 "name": "raid_bdev1", 00:14:53.340 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:53.340 "strip_size_kb": 64, 00:14:53.340 "state": "online", 00:14:53.340 "raid_level": "raid5f", 00:14:53.340 "superblock": true, 00:14:53.340 "num_base_bdevs": 3, 00:14:53.340 "num_base_bdevs_discovered": 3, 00:14:53.340 "num_base_bdevs_operational": 3, 00:14:53.340 "base_bdevs_list": [ 00:14:53.340 { 00:14:53.340 "name": "pt1", 00:14:53.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.340 "is_configured": true, 00:14:53.340 "data_offset": 2048, 00:14:53.340 "data_size": 63488 00:14:53.340 }, 00:14:53.340 { 00:14:53.340 "name": "pt2", 00:14:53.340 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:53.340 "is_configured": true, 00:14:53.340 "data_offset": 2048, 00:14:53.340 "data_size": 63488 00:14:53.340 }, 00:14:53.340 { 00:14:53.340 "name": "pt3", 00:14:53.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:53.340 "is_configured": true, 00:14:53.340 "data_offset": 2048, 00:14:53.340 "data_size": 63488 00:14:53.340 } 00:14:53.340 ] 00:14:53.340 }' 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.340 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.909 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:53.909 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:53.909 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.909 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.909 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.909 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.909 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.910 [2024-12-12 19:43:36.611226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.910 "name": "raid_bdev1", 00:14:53.910 "aliases": [ 00:14:53.910 "c214e182-b784-41cb-bc50-b27090834f40" 00:14:53.910 ], 00:14:53.910 "product_name": "Raid Volume", 00:14:53.910 "block_size": 512, 00:14:53.910 "num_blocks": 126976, 00:14:53.910 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:53.910 "assigned_rate_limits": { 00:14:53.910 "rw_ios_per_sec": 0, 00:14:53.910 "rw_mbytes_per_sec": 0, 00:14:53.910 "r_mbytes_per_sec": 0, 00:14:53.910 "w_mbytes_per_sec": 0 00:14:53.910 }, 00:14:53.910 "claimed": false, 00:14:53.910 "zoned": false, 00:14:53.910 "supported_io_types": { 00:14:53.910 "read": true, 00:14:53.910 "write": true, 00:14:53.910 "unmap": false, 00:14:53.910 "flush": false, 00:14:53.910 "reset": true, 00:14:53.910 "nvme_admin": false, 00:14:53.910 "nvme_io": false, 00:14:53.910 "nvme_io_md": false, 00:14:53.910 "write_zeroes": true, 00:14:53.910 "zcopy": false, 00:14:53.910 "get_zone_info": false, 00:14:53.910 "zone_management": false, 00:14:53.910 "zone_append": false, 00:14:53.910 "compare": false, 00:14:53.910 "compare_and_write": false, 00:14:53.910 "abort": false, 00:14:53.910 "seek_hole": false, 00:14:53.910 "seek_data": false, 00:14:53.910 "copy": false, 00:14:53.910 "nvme_iov_md": false 00:14:53.910 }, 00:14:53.910 "driver_specific": { 00:14:53.910 "raid": { 00:14:53.910 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:53.910 "strip_size_kb": 64, 00:14:53.910 "state": "online", 00:14:53.910 "raid_level": "raid5f", 00:14:53.910 "superblock": true, 00:14:53.910 "num_base_bdevs": 3, 00:14:53.910 "num_base_bdevs_discovered": 3, 00:14:53.910 "num_base_bdevs_operational": 3, 00:14:53.910 "base_bdevs_list": [ 00:14:53.910 { 00:14:53.910 "name": "pt1", 00:14:53.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.910 "is_configured": true, 00:14:53.910 "data_offset": 2048, 00:14:53.910 "data_size": 63488 00:14:53.910 }, 00:14:53.910 { 00:14:53.910 "name": "pt2", 00:14:53.910 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:53.910 "is_configured": true, 00:14:53.910 "data_offset": 2048, 00:14:53.910 "data_size": 63488 00:14:53.910 }, 00:14:53.910 { 00:14:53.910 "name": "pt3", 00:14:53.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:53.910 "is_configured": true, 00:14:53.910 "data_offset": 2048, 00:14:53.910 "data_size": 63488 00:14:53.910 } 00:14:53.910 ] 00:14:53.910 } 00:14:53.910 } 00:14:53.910 }' 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:53.910 pt2 00:14:53.910 pt3' 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.910 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] 
| .uuid' 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.170 [2024-12-12 19:43:36.866710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c214e182-b784-41cb-bc50-b27090834f40 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c214e182-b784-41cb-bc50-b27090834f40 ']' 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.170 [2024-12-12 19:43:36.914457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.170 [2024-12-12 19:43:36.914585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.170 [2024-12-12 19:43:36.914729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.170 [2024-12-12 19:43:36.914830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.170 [2024-12-12 19:43:36.914842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.170 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.171 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:54.171 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:54.171 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.171 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.171 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.171 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:54.171 19:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:54.171 19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.171 
19:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.171 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.171 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:54.171 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.171 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:54.171 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 
00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.431 [2024-12-12 19:43:37.070507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:54.431 [2024-12-12 19:43:37.072836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:54.431 [2024-12-12 19:43:37.072922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:54.431 [2024-12-12 19:43:37.073063] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:54.431 [2024-12-12 19:43:37.073143] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:54.431 [2024-12-12 19:43:37.073170] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:54.431 [2024-12-12 19:43:37.073192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.431 [2024-12-12 19:43:37.073205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:54.431 request: 00:14:54.431 { 00:14:54.431 "name": "raid_bdev1", 00:14:54.431 "raid_level": "raid5f", 00:14:54.431 "base_bdevs": [ 00:14:54.431 "malloc1", 00:14:54.431 "malloc2", 00:14:54.431 "malloc3" 00:14:54.431 ], 00:14:54.431 "strip_size_kb": 64, 00:14:54.431 "superblock": false, 00:14:54.431 "method": "bdev_raid_create", 00:14:54.431 "req_id": 1 00:14:54.431 } 00:14:54.431 Got JSON-RPC error response 00:14:54.431 response: 00:14:54.431 { 00:14:54.431 "code": -17, 00:14:54.431 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:54.431 } 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.431 [2024-12-12 19:43:37.138422] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:54.431 [2024-12-12 19:43:37.138589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.431 [2024-12-12 19:43:37.138641] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:54.431 [2024-12-12 19:43:37.138704] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.431 [2024-12-12 19:43:37.141441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.431 [2024-12-12 19:43:37.141528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:54.431 [2024-12-12 19:43:37.141712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:54.431 [2024-12-12 19:43:37.141836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:54.431 pt1 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.431 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.431 "name": "raid_bdev1", 00:14:54.431 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:54.431 "strip_size_kb": 64, 00:14:54.431 "state": "configuring", 00:14:54.432 "raid_level": "raid5f", 00:14:54.432 "superblock": true, 00:14:54.432 "num_base_bdevs": 3, 00:14:54.432 "num_base_bdevs_discovered": 1, 00:14:54.432 "num_base_bdevs_operational": 3, 00:14:54.432 "base_bdevs_list": [ 00:14:54.432 { 00:14:54.432 "name": "pt1", 00:14:54.432 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:54.432 "is_configured": true, 00:14:54.432 "data_offset": 2048, 00:14:54.432 "data_size": 63488 00:14:54.432 }, 00:14:54.432 { 00:14:54.432 "name": null, 00:14:54.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.432 "is_configured": false, 00:14:54.432 "data_offset": 2048, 00:14:54.432 "data_size": 63488 00:14:54.432 }, 00:14:54.432 { 00:14:54.432 "name": null, 00:14:54.432 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.432 "is_configured": false, 00:14:54.432 "data_offset": 2048, 00:14:54.432 "data_size": 63488 00:14:54.432 } 00:14:54.432 ] 00:14:54.432 }' 00:14:54.432 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.432 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.001 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:55.001 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b 
malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:55.001 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.001 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.001 [2024-12-12 19:43:37.569756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:55.001 [2024-12-12 19:43:37.569848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.001 [2024-12-12 19:43:37.569881] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:55.001 [2024-12-12 19:43:37.569893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.001 [2024-12-12 19:43:37.570486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.001 [2024-12-12 19:43:37.570524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:55.001 [2024-12-12 19:43:37.570691] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:55.001 [2024-12-12 19:43:37.570740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.001 pt2 00:14:55.001 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.001 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:55.001 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.002 [2024-12-12 19:43:37.577748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid5f 64 3 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.002 "name": "raid_bdev1", 00:14:55.002 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:55.002 "strip_size_kb": 64, 00:14:55.002 "state": "configuring", 00:14:55.002 "raid_level": "raid5f", 00:14:55.002 "superblock": true, 00:14:55.002 "num_base_bdevs": 3, 00:14:55.002 
"num_base_bdevs_discovered": 1, 00:14:55.002 "num_base_bdevs_operational": 3, 00:14:55.002 "base_bdevs_list": [ 00:14:55.002 { 00:14:55.002 "name": "pt1", 00:14:55.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:55.002 "is_configured": true, 00:14:55.002 "data_offset": 2048, 00:14:55.002 "data_size": 63488 00:14:55.002 }, 00:14:55.002 { 00:14:55.002 "name": null, 00:14:55.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.002 "is_configured": false, 00:14:55.002 "data_offset": 0, 00:14:55.002 "data_size": 63488 00:14:55.002 }, 00:14:55.002 { 00:14:55.002 "name": null, 00:14:55.002 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.002 "is_configured": false, 00:14:55.002 "data_offset": 2048, 00:14:55.002 "data_size": 63488 00:14:55.002 } 00:14:55.002 ] 00:14:55.002 }' 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.002 19:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.261 [2024-12-12 19:43:38.052890] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:55.261 [2024-12-12 19:43:38.052990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.261 [2024-12-12 19:43:38.053016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:55.261 [2024-12-12 
19:43:38.053030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.261 [2024-12-12 19:43:38.053655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.261 [2024-12-12 19:43:38.053694] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:55.261 [2024-12-12 19:43:38.053817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:55.261 [2024-12-12 19:43:38.053852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.261 pt2 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.261 [2024-12-12 19:43:38.064838] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:55.261 [2024-12-12 19:43:38.064907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.261 [2024-12-12 19:43:38.064926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:55.261 [2024-12-12 19:43:38.064940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.261 [2024-12-12 19:43:38.065447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.261 [2024-12-12 19:43:38.065473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt3 00:14:55.261 [2024-12-12 19:43:38.065592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:55.261 [2024-12-12 19:43:38.065624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:55.261 [2024-12-12 19:43:38.065784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:55.261 [2024-12-12 19:43:38.065799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:55.261 [2024-12-12 19:43:38.066145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:55.261 [2024-12-12 19:43:38.071740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:55.261 [2024-12-12 19:43:38.071767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:55.261 [2024-12-12 19:43:38.072028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.261 pt3 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:55.261 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.262 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.521 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.521 "name": "raid_bdev1", 00:14:55.521 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:55.521 "strip_size_kb": 64, 00:14:55.521 "state": "online", 00:14:55.521 "raid_level": "raid5f", 00:14:55.521 "superblock": true, 00:14:55.521 "num_base_bdevs": 3, 00:14:55.521 "num_base_bdevs_discovered": 3, 00:14:55.521 "num_base_bdevs_operational": 3, 00:14:55.521 "base_bdevs_list": [ 00:14:55.521 { 00:14:55.521 "name": "pt1", 00:14:55.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:55.521 "is_configured": true, 00:14:55.521 "data_offset": 2048, 00:14:55.521 "data_size": 63488 00:14:55.521 }, 00:14:55.521 { 00:14:55.521 "name": "pt2", 00:14:55.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.521 "is_configured": true, 00:14:55.521 "data_offset": 2048, 00:14:55.521 "data_size": 63488 00:14:55.521 }, 00:14:55.521 { 
00:14:55.521 "name": "pt3", 00:14:55.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.521 "is_configured": true, 00:14:55.521 "data_offset": 2048, 00:14:55.521 "data_size": 63488 00:14:55.521 } 00:14:55.521 ] 00:14:55.521 }' 00:14:55.521 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.521 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.781 [2024-12-12 19:43:38.555275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:55.781 "name": "raid_bdev1", 00:14:55.781 "aliases": [ 00:14:55.781 "c214e182-b784-41cb-bc50-b27090834f40" 00:14:55.781 ], 00:14:55.781 
"product_name": "Raid Volume", 00:14:55.781 "block_size": 512, 00:14:55.781 "num_blocks": 126976, 00:14:55.781 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:55.781 "assigned_rate_limits": { 00:14:55.781 "rw_ios_per_sec": 0, 00:14:55.781 "rw_mbytes_per_sec": 0, 00:14:55.781 "r_mbytes_per_sec": 0, 00:14:55.781 "w_mbytes_per_sec": 0 00:14:55.781 }, 00:14:55.781 "claimed": false, 00:14:55.781 "zoned": false, 00:14:55.781 "supported_io_types": { 00:14:55.781 "read": true, 00:14:55.781 "write": true, 00:14:55.781 "unmap": false, 00:14:55.781 "flush": false, 00:14:55.781 "reset": true, 00:14:55.781 "nvme_admin": false, 00:14:55.781 "nvme_io": false, 00:14:55.781 "nvme_io_md": false, 00:14:55.781 "write_zeroes": true, 00:14:55.781 "zcopy": false, 00:14:55.781 "get_zone_info": false, 00:14:55.781 "zone_management": false, 00:14:55.781 "zone_append": false, 00:14:55.781 "compare": false, 00:14:55.781 "compare_and_write": false, 00:14:55.781 "abort": false, 00:14:55.781 "seek_hole": false, 00:14:55.781 "seek_data": false, 00:14:55.781 "copy": false, 00:14:55.781 "nvme_iov_md": false 00:14:55.781 }, 00:14:55.781 "driver_specific": { 00:14:55.781 "raid": { 00:14:55.781 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:55.781 "strip_size_kb": 64, 00:14:55.781 "state": "online", 00:14:55.781 "raid_level": "raid5f", 00:14:55.781 "superblock": true, 00:14:55.781 "num_base_bdevs": 3, 00:14:55.781 "num_base_bdevs_discovered": 3, 00:14:55.781 "num_base_bdevs_operational": 3, 00:14:55.781 "base_bdevs_list": [ 00:14:55.781 { 00:14:55.781 "name": "pt1", 00:14:55.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:55.781 "is_configured": true, 00:14:55.781 "data_offset": 2048, 00:14:55.781 "data_size": 63488 00:14:55.781 }, 00:14:55.781 { 00:14:55.781 "name": "pt2", 00:14:55.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.781 "is_configured": true, 00:14:55.781 "data_offset": 2048, 00:14:55.781 "data_size": 63488 00:14:55.781 }, 00:14:55.781 { 00:14:55.781 
"name": "pt3", 00:14:55.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.781 "is_configured": true, 00:14:55.781 "data_offset": 2048, 00:14:55.781 "data_size": 63488 00:14:55.781 } 00:14:55.781 ] 00:14:55.781 } 00:14:55.781 } 00:14:55.781 }' 00:14:55.781 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:56.040 pt2 00:14:56.040 pt3' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:56.040 19:43:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:56.040 [2024-12-12 19:43:38.842783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c214e182-b784-41cb-bc50-b27090834f40 '!=' c214e182-b784-41cb-bc50-b27090834f40 ']' 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.040 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.298 [2024-12-12 19:43:38.886607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:56.298 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.299 19:43:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.299 "name": "raid_bdev1", 00:14:56.299 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:56.299 "strip_size_kb": 64, 00:14:56.299 "state": "online", 00:14:56.299 "raid_level": "raid5f", 00:14:56.299 "superblock": true, 00:14:56.299 "num_base_bdevs": 3, 00:14:56.299 "num_base_bdevs_discovered": 2, 00:14:56.299 "num_base_bdevs_operational": 2, 00:14:56.299 "base_bdevs_list": [ 00:14:56.299 { 00:14:56.299 "name": null, 00:14:56.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.299 "is_configured": false, 00:14:56.299 "data_offset": 0, 00:14:56.299 "data_size": 63488 00:14:56.299 }, 00:14:56.299 { 00:14:56.299 "name": "pt2", 00:14:56.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.299 "is_configured": true, 00:14:56.299 "data_offset": 2048, 00:14:56.299 "data_size": 63488 00:14:56.299 }, 00:14:56.299 { 00:14:56.299 "name": "pt3", 00:14:56.299 "uuid": "00000000-0000-0000-0000-000000000003", 
00:14:56.299 "is_configured": true, 00:14:56.299 "data_offset": 2048, 00:14:56.299 "data_size": 63488 00:14:56.299 } 00:14:56.299 ] 00:14:56.299 }' 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.299 19:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.566 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.566 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 [2024-12-12 19:43:39.381806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.566 [2024-12-12 19:43:39.381916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.566 [2024-12-12 19:43:39.382058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.566 [2024-12-12 19:43:39.382195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.566 [2024-12-12 19:43:39.382267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:56.566 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.566 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.566 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:56.566 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.566 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.835 
19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p 
pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.835 [2024-12-12 19:43:39.469586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.835 [2024-12-12 19:43:39.469648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.835 [2024-12-12 19:43:39.469669] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:56.835 [2024-12-12 19:43:39.469683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.835 [2024-12-12 19:43:39.472380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.835 [2024-12-12 19:43:39.472427] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.835 [2024-12-12 19:43:39.472520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:56.835 [2024-12-12 19:43:39.472605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.835 pt2 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.835 "name": "raid_bdev1", 00:14:56.835 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:56.835 "strip_size_kb": 64, 00:14:56.835 "state": "configuring", 00:14:56.835 "raid_level": "raid5f", 00:14:56.835 "superblock": true, 00:14:56.835 "num_base_bdevs": 3, 00:14:56.835 "num_base_bdevs_discovered": 1, 00:14:56.835 "num_base_bdevs_operational": 2, 00:14:56.835 "base_bdevs_list": [ 00:14:56.835 { 00:14:56.835 "name": null, 00:14:56.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.835 "is_configured": false, 00:14:56.835 "data_offset": 2048, 00:14:56.835 "data_size": 63488 00:14:56.835 }, 00:14:56.835 { 00:14:56.835 "name": "pt2", 00:14:56.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.835 "is_configured": true, 00:14:56.835 "data_offset": 2048, 00:14:56.835 "data_size": 63488 00:14:56.835 }, 
00:14:56.835 { 00:14:56.835 "name": null, 00:14:56.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.835 "is_configured": false, 00:14:56.835 "data_offset": 2048, 00:14:56.835 "data_size": 63488 00:14:56.835 } 00:14:56.835 ] 00:14:56.835 }' 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.835 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.095 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:57.095 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:57.095 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:57.095 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:57.095 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.095 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.095 [2024-12-12 19:43:39.932866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:57.095 [2024-12-12 19:43:39.933054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.095 [2024-12-12 19:43:39.933132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:57.095 [2024-12-12 19:43:39.933192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.095 [2024-12-12 19:43:39.933885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.095 [2024-12-12 19:43:39.933970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:57.095 [2024-12-12 19:43:39.934147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:57.095 
[2024-12-12 19:43:39.934243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:57.095 [2024-12-12 19:43:39.934453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:57.095 [2024-12-12 19:43:39.934504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.095 [2024-12-12 19:43:39.934885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:57.356 [2024-12-12 19:43:39.940171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:57.356 [2024-12-12 19:43:39.940238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:57.356 [2024-12-12 19:43:39.940739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.356 pt3 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 
-- # local num_base_bdevs_discovered 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.356 "name": "raid_bdev1", 00:14:57.356 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:57.356 "strip_size_kb": 64, 00:14:57.356 "state": "online", 00:14:57.356 "raid_level": "raid5f", 00:14:57.356 "superblock": true, 00:14:57.356 "num_base_bdevs": 3, 00:14:57.356 "num_base_bdevs_discovered": 2, 00:14:57.356 "num_base_bdevs_operational": 2, 00:14:57.356 "base_bdevs_list": [ 00:14:57.356 { 00:14:57.356 "name": null, 00:14:57.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.356 "is_configured": false, 00:14:57.356 "data_offset": 2048, 00:14:57.356 "data_size": 63488 00:14:57.356 }, 00:14:57.356 { 00:14:57.356 "name": "pt2", 00:14:57.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.356 "is_configured": true, 00:14:57.356 "data_offset": 2048, 00:14:57.356 "data_size": 63488 00:14:57.356 }, 00:14:57.356 { 00:14:57.356 "name": "pt3", 00:14:57.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.356 "is_configured": true, 00:14:57.356 "data_offset": 2048, 00:14:57.356 "data_size": 63488 00:14:57.356 } 00:14:57.356 ] 00:14:57.356 }' 00:14:57.356 19:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.356 
19:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.617 [2024-12-12 19:43:40.384403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.617 [2024-12-12 19:43:40.384459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.617 [2024-12-12 19:43:40.384594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.617 [2024-12-12 19:43:40.384681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.617 [2024-12-12 19:43:40.384694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- 
# '[' 3 -gt 2 ']' 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.617 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.617 [2024-12-12 19:43:40.460255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:57.877 [2024-12-12 19:43:40.460388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.877 [2024-12-12 19:43:40.460420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:57.877 [2024-12-12 19:43:40.460431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.877 [2024-12-12 19:43:40.463305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.877 [2024-12-12 19:43:40.463409] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:57.877 [2024-12-12 19:43:40.463563] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:57.877 [2024-12-12 19:43:40.463643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:57.877 [2024-12-12 19:43:40.463852] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:57.877 [2024-12-12 19:43:40.463867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.877 [2024-12-12 19:43:40.463889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:57.877 [2024-12-12 19:43:40.463952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.877 pt1 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.877 "name": "raid_bdev1", 00:14:57.877 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:57.877 "strip_size_kb": 64, 00:14:57.877 "state": "configuring", 00:14:57.877 "raid_level": "raid5f", 00:14:57.877 "superblock": true, 00:14:57.877 "num_base_bdevs": 3, 00:14:57.877 "num_base_bdevs_discovered": 1, 00:14:57.877 "num_base_bdevs_operational": 2, 00:14:57.877 "base_bdevs_list": [ 00:14:57.877 { 00:14:57.877 "name": null, 00:14:57.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.877 "is_configured": false, 00:14:57.877 "data_offset": 2048, 00:14:57.877 "data_size": 63488 00:14:57.877 }, 00:14:57.877 { 00:14:57.877 "name": "pt2", 00:14:57.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.877 "is_configured": true, 00:14:57.877 "data_offset": 2048, 00:14:57.877 "data_size": 63488 00:14:57.877 }, 00:14:57.877 { 00:14:57.877 "name": null, 00:14:57.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.877 "is_configured": false, 00:14:57.877 "data_offset": 2048, 00:14:57.877 "data_size": 63488 00:14:57.877 } 00:14:57.877 ] 00:14:57.877 }' 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.877 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.136 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:58.136 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:58.136 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.136 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:58.136 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.136 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.137 [2024-12-12 19:43:40.939641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:58.137 [2024-12-12 19:43:40.939830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.137 [2024-12-12 19:43:40.939882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:58.137 [2024-12-12 19:43:40.939952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.137 [2024-12-12 19:43:40.940648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.137 [2024-12-12 19:43:40.940722] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:58.137 [2024-12-12 19:43:40.940912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:58.137 [2024-12-12 19:43:40.940977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:58.137 [2024-12-12 19:43:40.941205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:58.137 [2024-12-12 19:43:40.941255] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:58.137 [2024-12-12 19:43:40.941615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:58.137 [2024-12-12 19:43:40.947514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:58.137 [2024-12-12 19:43:40.947625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:58.137 [2024-12-12 19:43:40.947991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.137 pt3 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.137 19:43:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.137 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.397 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.397 "name": "raid_bdev1", 00:14:58.397 "uuid": "c214e182-b784-41cb-bc50-b27090834f40", 00:14:58.397 "strip_size_kb": 64, 00:14:58.397 "state": "online", 00:14:58.397 "raid_level": "raid5f", 00:14:58.397 "superblock": true, 00:14:58.397 "num_base_bdevs": 3, 00:14:58.397 "num_base_bdevs_discovered": 2, 00:14:58.397 "num_base_bdevs_operational": 2, 00:14:58.397 "base_bdevs_list": [ 00:14:58.397 { 00:14:58.397 "name": null, 00:14:58.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.397 "is_configured": false, 00:14:58.397 "data_offset": 2048, 00:14:58.397 "data_size": 63488 00:14:58.397 }, 00:14:58.397 { 00:14:58.397 "name": "pt2", 00:14:58.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.397 "is_configured": true, 00:14:58.397 "data_offset": 2048, 00:14:58.397 "data_size": 63488 00:14:58.397 }, 00:14:58.397 { 00:14:58.397 "name": "pt3", 00:14:58.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.397 "is_configured": true, 00:14:58.397 "data_offset": 2048, 00:14:58.397 "data_size": 63488 00:14:58.397 } 00:14:58.397 ] 00:14:58.397 }' 00:14:58.397 19:43:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.397 19:43:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.657 [2024-12-12 19:43:41.419015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c214e182-b784-41cb-bc50-b27090834f40 '!=' c214e182-b784-41cb-bc50-b27090834f40 ']' 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82810 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 82810 ']' 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 82810 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.657 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82810 00:14:58.917 
19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.917 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.917 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82810' 00:14:58.917 killing process with pid 82810 00:14:58.917 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 82810 00:14:58.917 [2024-12-12 19:43:41.504151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.917 19:43:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 82810 00:14:58.917 [2024-12-12 19:43:41.504303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.917 [2024-12-12 19:43:41.504390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.917 [2024-12-12 19:43:41.504405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:59.176 [2024-12-12 19:43:41.828987] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.558 ************************************ 00:15:00.558 END TEST raid5f_superblock_test 00:15:00.558 ************************************ 00:15:00.558 19:43:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:00.558 00:15:00.558 real 0m8.090s 00:15:00.558 user 0m12.438s 00:15:00.558 sys 0m1.650s 00:15:00.558 19:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.558 19:43:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.558 19:43:43 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:00.558 19:43:43 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 
00:15:00.558 19:43:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:00.558 19:43:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.558 19:43:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.558 ************************************ 00:15:00.558 START TEST raid5f_rebuild_test 00:15:00.558 ************************************ 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=83259 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 83259 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 83259 ']' 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.558 19:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.558 [2024-12-12 19:43:43.230719] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:00.558 [2024-12-12 19:43:43.230946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83259 ] 00:15:00.558 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:00.558 Zero copy mechanism will not be used. 
00:15:00.818 [2024-12-12 19:43:43.413608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.818 [2024-12-12 19:43:43.553447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.076 [2024-12-12 19:43:43.791669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.076 [2024-12-12 19:43:43.791780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.336 BaseBdev1_malloc 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.336 [2024-12-12 19:43:44.119061] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:01.336 [2024-12-12 19:43:44.119194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.336 [2024-12-12 19:43:44.119245] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:01.336 [2024-12-12 19:43:44.119321] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.336 [2024-12-12 19:43:44.121928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.336 [2024-12-12 19:43:44.121976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:01.336 BaseBdev1 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.336 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.337 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:01.337 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.337 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.597 BaseBdev2_malloc 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.597 [2024-12-12 19:43:44.193161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:01.597 [2024-12-12 19:43:44.193247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.597 [2024-12-12 19:43:44.193268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:01.597 [2024-12-12 19:43:44.193279] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.597 [2024-12-12 19:43:44.195409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.597 
[2024-12-12 19:43:44.195451] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:01.597 BaseBdev2 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.597 BaseBdev3_malloc 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.597 [2024-12-12 19:43:44.276386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:01.597 [2024-12-12 19:43:44.276446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.597 [2024-12-12 19:43:44.276466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:01.597 [2024-12-12 19:43:44.276476] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.597 [2024-12-12 19:43:44.278496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.597 [2024-12-12 19:43:44.278538] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:01.597 BaseBdev3 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.597 spare_malloc 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.597 spare_delay 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.597 [2024-12-12 19:43:44.342097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:01.597 [2024-12-12 19:43:44.342154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.597 [2024-12-12 19:43:44.342173] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:01.597 [2024-12-12 19:43:44.342183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.597 [2024-12-12 19:43:44.344198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.597 [2024-12-12 19:43:44.344313] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:01.597 spare 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.597 [2024-12-12 19:43:44.354140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.597 [2024-12-12 19:43:44.355889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.597 [2024-12-12 19:43:44.355948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.597 [2024-12-12 19:43:44.356026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:01.597 [2024-12-12 19:43:44.356036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:01.597 [2024-12-12 19:43:44.356253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:01.597 [2024-12-12 19:43:44.361231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:01.597 [2024-12-12 19:43:44.361253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:01.597 [2024-12-12 19:43:44.361418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:01.597 19:43:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.597 "name": "raid_bdev1", 00:15:01.597 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:01.597 "strip_size_kb": 64, 00:15:01.597 "state": "online", 00:15:01.597 "raid_level": "raid5f", 00:15:01.597 "superblock": false, 00:15:01.597 "num_base_bdevs": 3, 00:15:01.597 "num_base_bdevs_discovered": 3, 00:15:01.597 "num_base_bdevs_operational": 3, 00:15:01.597 "base_bdevs_list": [ 00:15:01.597 { 
00:15:01.597 "name": "BaseBdev1", 00:15:01.597 "uuid": "5c523ea0-16d2-557b-a4b8-3d9b5a1aaa48", 00:15:01.597 "is_configured": true, 00:15:01.597 "data_offset": 0, 00:15:01.597 "data_size": 65536 00:15:01.597 }, 00:15:01.597 { 00:15:01.597 "name": "BaseBdev2", 00:15:01.597 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:01.597 "is_configured": true, 00:15:01.597 "data_offset": 0, 00:15:01.597 "data_size": 65536 00:15:01.597 }, 00:15:01.597 { 00:15:01.597 "name": "BaseBdev3", 00:15:01.597 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:01.597 "is_configured": true, 00:15:01.597 "data_offset": 0, 00:15:01.597 "data_size": 65536 00:15:01.597 } 00:15:01.597 ] 00:15:01.597 }' 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.597 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.167 [2024-12-12 19:43:44.782750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.167 
19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.167 19:43:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:02.427 [2024-12-12 19:43:45.046386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:02.427 /dev/nbd0 00:15:02.427 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:02.427 19:43:45 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:02.427 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:02.427 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:02.427 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:02.427 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:02.427 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:02.427 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.428 1+0 records in 00:15:02.428 1+0 records out 00:15:02.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437051 s, 9.4 MB/s 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:02.428 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:02.687 512+0 records in 00:15:02.687 512+0 records out 00:15:02.687 67108864 bytes (67 MB, 64 MiB) copied, 0.381253 s, 176 MB/s 00:15:02.687 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:02.687 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.687 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:02.687 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.687 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:02.687 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.687 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.946 [2024-12-12 19:43:45.723666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.946 [2024-12-12 19:43:45.742713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.946 19:43:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.946 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.205 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.205 "name": "raid_bdev1", 00:15:03.205 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:03.205 "strip_size_kb": 64, 00:15:03.205 "state": "online", 00:15:03.205 "raid_level": "raid5f", 00:15:03.205 "superblock": false, 00:15:03.205 "num_base_bdevs": 3, 00:15:03.205 "num_base_bdevs_discovered": 2, 00:15:03.205 "num_base_bdevs_operational": 2, 00:15:03.205 "base_bdevs_list": [ 00:15:03.205 { 00:15:03.205 "name": null, 00:15:03.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.205 "is_configured": false, 00:15:03.205 "data_offset": 0, 00:15:03.205 "data_size": 65536 00:15:03.205 }, 00:15:03.205 { 00:15:03.205 "name": "BaseBdev2", 00:15:03.205 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:03.205 "is_configured": true, 00:15:03.205 "data_offset": 0, 00:15:03.205 "data_size": 65536 00:15:03.205 }, 00:15:03.205 { 00:15:03.205 "name": "BaseBdev3", 00:15:03.205 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:03.205 "is_configured": true, 00:15:03.205 "data_offset": 0, 00:15:03.205 "data_size": 65536 00:15:03.205 } 00:15:03.205 ] 00:15:03.205 }' 00:15:03.205 19:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.205 19:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.465 19:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.465 19:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.465 19:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.465 [2024-12-12 19:43:46.193929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.465 [2024-12-12 19:43:46.209469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:03.465 19:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.465 19:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:03.465 [2024-12-12 19:43:46.216635] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.401 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.660 19:43:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.660 "name": "raid_bdev1", 00:15:04.660 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:04.660 "strip_size_kb": 64, 00:15:04.660 "state": "online", 00:15:04.660 "raid_level": "raid5f", 00:15:04.660 "superblock": false, 00:15:04.660 "num_base_bdevs": 3, 00:15:04.660 "num_base_bdevs_discovered": 3, 00:15:04.660 "num_base_bdevs_operational": 3, 00:15:04.660 "process": { 00:15:04.660 "type": "rebuild", 00:15:04.660 "target": "spare", 00:15:04.660 "progress": { 00:15:04.660 "blocks": 20480, 00:15:04.660 "percent": 15 00:15:04.660 } 00:15:04.660 }, 00:15:04.660 "base_bdevs_list": [ 00:15:04.660 { 00:15:04.660 "name": "spare", 00:15:04.660 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:04.660 "is_configured": true, 00:15:04.660 "data_offset": 0, 00:15:04.660 "data_size": 65536 00:15:04.660 }, 00:15:04.660 { 00:15:04.660 "name": "BaseBdev2", 00:15:04.660 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:04.660 "is_configured": true, 00:15:04.660 "data_offset": 0, 00:15:04.660 "data_size": 65536 00:15:04.660 }, 00:15:04.660 { 00:15:04.660 "name": "BaseBdev3", 00:15:04.660 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:04.660 "is_configured": true, 00:15:04.660 "data_offset": 0, 00:15:04.660 "data_size": 65536 00:15:04.660 } 00:15:04.660 ] 00:15:04.660 }' 00:15:04.660 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.661 [2024-12-12 19:43:47.375439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.661 [2024-12-12 19:43:47.424313] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:04.661 [2024-12-12 19:43:47.424420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.661 [2024-12-12 19:43:47.424459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.661 [2024-12-12 19:43:47.424480] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.661 19:43:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.661 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.920 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.920 "name": "raid_bdev1", 00:15:04.920 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:04.920 "strip_size_kb": 64, 00:15:04.920 "state": "online", 00:15:04.920 "raid_level": "raid5f", 00:15:04.920 "superblock": false, 00:15:04.920 "num_base_bdevs": 3, 00:15:04.921 "num_base_bdevs_discovered": 2, 00:15:04.921 "num_base_bdevs_operational": 2, 00:15:04.921 "base_bdevs_list": [ 00:15:04.921 { 00:15:04.921 "name": null, 00:15:04.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.921 "is_configured": false, 00:15:04.921 "data_offset": 0, 00:15:04.921 "data_size": 65536 00:15:04.921 }, 00:15:04.921 { 00:15:04.921 "name": "BaseBdev2", 00:15:04.921 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:04.921 "is_configured": true, 00:15:04.921 "data_offset": 0, 00:15:04.921 "data_size": 65536 00:15:04.921 }, 00:15:04.921 { 00:15:04.921 "name": "BaseBdev3", 00:15:04.921 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:04.921 "is_configured": true, 00:15:04.921 "data_offset": 0, 00:15:04.921 "data_size": 65536 00:15:04.921 } 00:15:04.921 ] 00:15:04.921 }' 00:15:04.921 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.921 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.181 "name": "raid_bdev1", 00:15:05.181 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:05.181 "strip_size_kb": 64, 00:15:05.181 "state": "online", 00:15:05.181 "raid_level": "raid5f", 00:15:05.181 "superblock": false, 00:15:05.181 "num_base_bdevs": 3, 00:15:05.181 "num_base_bdevs_discovered": 2, 00:15:05.181 "num_base_bdevs_operational": 2, 00:15:05.181 "base_bdevs_list": [ 00:15:05.181 { 00:15:05.181 "name": null, 00:15:05.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.181 "is_configured": false, 00:15:05.181 "data_offset": 0, 00:15:05.181 "data_size": 65536 00:15:05.181 }, 00:15:05.181 { 00:15:05.181 "name": "BaseBdev2", 00:15:05.181 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:05.181 "is_configured": true, 00:15:05.181 "data_offset": 0, 00:15:05.181 "data_size": 65536 00:15:05.181 }, 00:15:05.181 { 00:15:05.181 "name": "BaseBdev3", 
00:15:05.181 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:05.181 "is_configured": true, 00:15:05.181 "data_offset": 0, 00:15:05.181 "data_size": 65536 00:15:05.181 } 00:15:05.181 ] 00:15:05.181 }' 00:15:05.181 19:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.181 19:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.181 19:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.440 19:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.440 19:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.440 19:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.440 19:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.440 [2024-12-12 19:43:48.043239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.440 [2024-12-12 19:43:48.057801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:05.440 19:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.440 19:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:05.440 [2024-12-12 19:43:48.065142] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.380 "name": "raid_bdev1", 00:15:06.380 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:06.380 "strip_size_kb": 64, 00:15:06.380 "state": "online", 00:15:06.380 "raid_level": "raid5f", 00:15:06.380 "superblock": false, 00:15:06.380 "num_base_bdevs": 3, 00:15:06.380 "num_base_bdevs_discovered": 3, 00:15:06.380 "num_base_bdevs_operational": 3, 00:15:06.380 "process": { 00:15:06.380 "type": "rebuild", 00:15:06.380 "target": "spare", 00:15:06.380 "progress": { 00:15:06.380 "blocks": 20480, 00:15:06.380 "percent": 15 00:15:06.380 } 00:15:06.380 }, 00:15:06.380 "base_bdevs_list": [ 00:15:06.380 { 00:15:06.380 "name": "spare", 00:15:06.380 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:06.380 "is_configured": true, 00:15:06.380 "data_offset": 0, 00:15:06.380 "data_size": 65536 00:15:06.380 }, 00:15:06.380 { 00:15:06.380 "name": "BaseBdev2", 00:15:06.380 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:06.380 "is_configured": true, 00:15:06.380 "data_offset": 0, 00:15:06.380 "data_size": 65536 00:15:06.380 }, 00:15:06.380 { 00:15:06.380 "name": "BaseBdev3", 00:15:06.380 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:06.380 "is_configured": true, 00:15:06.380 "data_offset": 0, 00:15:06.380 
"data_size": 65536 00:15:06.380 } 00:15:06.380 ] 00:15:06.380 }' 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=545 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.380 19:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.380 19:43:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.640 19:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.640 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.640 "name": "raid_bdev1", 00:15:06.640 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:06.640 "strip_size_kb": 64, 00:15:06.640 "state": "online", 00:15:06.640 "raid_level": "raid5f", 00:15:06.640 "superblock": false, 00:15:06.640 "num_base_bdevs": 3, 00:15:06.640 "num_base_bdevs_discovered": 3, 00:15:06.640 "num_base_bdevs_operational": 3, 00:15:06.640 "process": { 00:15:06.640 "type": "rebuild", 00:15:06.640 "target": "spare", 00:15:06.640 "progress": { 00:15:06.640 "blocks": 22528, 00:15:06.640 "percent": 17 00:15:06.640 } 00:15:06.640 }, 00:15:06.640 "base_bdevs_list": [ 00:15:06.640 { 00:15:06.640 "name": "spare", 00:15:06.640 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:06.640 "is_configured": true, 00:15:06.640 "data_offset": 0, 00:15:06.640 "data_size": 65536 00:15:06.640 }, 00:15:06.640 { 00:15:06.640 "name": "BaseBdev2", 00:15:06.640 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:06.640 "is_configured": true, 00:15:06.640 "data_offset": 0, 00:15:06.640 "data_size": 65536 00:15:06.640 }, 00:15:06.640 { 00:15:06.640 "name": "BaseBdev3", 00:15:06.640 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:06.640 "is_configured": true, 00:15:06.640 "data_offset": 0, 00:15:06.640 "data_size": 65536 00:15:06.640 } 00:15:06.640 ] 00:15:06.640 }' 00:15:06.640 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.640 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.640 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.640 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- 
# [[ spare == \s\p\a\r\e ]] 00:15:06.640 19:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.578 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.578 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.579 "name": "raid_bdev1", 00:15:07.579 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:07.579 "strip_size_kb": 64, 00:15:07.579 "state": "online", 00:15:07.579 "raid_level": "raid5f", 00:15:07.579 "superblock": false, 00:15:07.579 "num_base_bdevs": 3, 00:15:07.579 "num_base_bdevs_discovered": 3, 00:15:07.579 "num_base_bdevs_operational": 3, 00:15:07.579 "process": { 00:15:07.579 "type": "rebuild", 00:15:07.579 "target": "spare", 00:15:07.579 "progress": { 00:15:07.579 "blocks": 45056, 00:15:07.579 "percent": 34 00:15:07.579 } 00:15:07.579 }, 
00:15:07.579 "base_bdevs_list": [ 00:15:07.579 { 00:15:07.579 "name": "spare", 00:15:07.579 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:07.579 "is_configured": true, 00:15:07.579 "data_offset": 0, 00:15:07.579 "data_size": 65536 00:15:07.579 }, 00:15:07.579 { 00:15:07.579 "name": "BaseBdev2", 00:15:07.579 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:07.579 "is_configured": true, 00:15:07.579 "data_offset": 0, 00:15:07.579 "data_size": 65536 00:15:07.579 }, 00:15:07.579 { 00:15:07.579 "name": "BaseBdev3", 00:15:07.579 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:07.579 "is_configured": true, 00:15:07.579 "data_offset": 0, 00:15:07.579 "data_size": 65536 00:15:07.579 } 00:15:07.579 ] 00:15:07.579 }' 00:15:07.579 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.838 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.838 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.838 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.838 19:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.777 "name": "raid_bdev1", 00:15:08.777 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:08.777 "strip_size_kb": 64, 00:15:08.777 "state": "online", 00:15:08.777 "raid_level": "raid5f", 00:15:08.777 "superblock": false, 00:15:08.777 "num_base_bdevs": 3, 00:15:08.777 "num_base_bdevs_discovered": 3, 00:15:08.777 "num_base_bdevs_operational": 3, 00:15:08.777 "process": { 00:15:08.777 "type": "rebuild", 00:15:08.777 "target": "spare", 00:15:08.777 "progress": { 00:15:08.777 "blocks": 69632, 00:15:08.777 "percent": 53 00:15:08.777 } 00:15:08.777 }, 00:15:08.777 "base_bdevs_list": [ 00:15:08.777 { 00:15:08.777 "name": "spare", 00:15:08.777 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:08.777 "is_configured": true, 00:15:08.777 "data_offset": 0, 00:15:08.777 "data_size": 65536 00:15:08.777 }, 00:15:08.777 { 00:15:08.777 "name": "BaseBdev2", 00:15:08.777 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:08.777 "is_configured": true, 00:15:08.777 "data_offset": 0, 00:15:08.777 "data_size": 65536 00:15:08.777 }, 00:15:08.777 { 00:15:08.777 "name": "BaseBdev3", 00:15:08.777 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:08.777 "is_configured": true, 00:15:08.777 "data_offset": 0, 00:15:08.777 "data_size": 65536 00:15:08.777 } 00:15:08.777 ] 00:15:08.777 }' 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.777 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.037 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.037 19:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.975 "name": "raid_bdev1", 00:15:09.975 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:09.975 "strip_size_kb": 64, 00:15:09.975 "state": "online", 00:15:09.975 "raid_level": "raid5f", 00:15:09.975 "superblock": false, 00:15:09.975 
"num_base_bdevs": 3, 00:15:09.975 "num_base_bdevs_discovered": 3, 00:15:09.975 "num_base_bdevs_operational": 3, 00:15:09.975 "process": { 00:15:09.975 "type": "rebuild", 00:15:09.975 "target": "spare", 00:15:09.975 "progress": { 00:15:09.975 "blocks": 92160, 00:15:09.975 "percent": 70 00:15:09.975 } 00:15:09.975 }, 00:15:09.975 "base_bdevs_list": [ 00:15:09.975 { 00:15:09.975 "name": "spare", 00:15:09.975 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:09.975 "is_configured": true, 00:15:09.975 "data_offset": 0, 00:15:09.975 "data_size": 65536 00:15:09.975 }, 00:15:09.975 { 00:15:09.975 "name": "BaseBdev2", 00:15:09.975 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:09.975 "is_configured": true, 00:15:09.975 "data_offset": 0, 00:15:09.975 "data_size": 65536 00:15:09.975 }, 00:15:09.975 { 00:15:09.975 "name": "BaseBdev3", 00:15:09.975 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:09.975 "is_configured": true, 00:15:09.975 "data_offset": 0, 00:15:09.975 "data_size": 65536 00:15:09.975 } 00:15:09.975 ] 00:15:09.975 }' 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.975 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.235 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.235 19:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.174 "name": "raid_bdev1", 00:15:11.174 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:11.174 "strip_size_kb": 64, 00:15:11.174 "state": "online", 00:15:11.174 "raid_level": "raid5f", 00:15:11.174 "superblock": false, 00:15:11.174 "num_base_bdevs": 3, 00:15:11.174 "num_base_bdevs_discovered": 3, 00:15:11.174 "num_base_bdevs_operational": 3, 00:15:11.174 "process": { 00:15:11.174 "type": "rebuild", 00:15:11.174 "target": "spare", 00:15:11.174 "progress": { 00:15:11.174 "blocks": 116736, 00:15:11.174 "percent": 89 00:15:11.174 } 00:15:11.174 }, 00:15:11.174 "base_bdevs_list": [ 00:15:11.174 { 00:15:11.174 "name": "spare", 00:15:11.174 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:11.174 "is_configured": true, 00:15:11.174 "data_offset": 0, 00:15:11.174 "data_size": 65536 00:15:11.174 }, 00:15:11.174 { 00:15:11.174 "name": "BaseBdev2", 00:15:11.174 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:11.174 "is_configured": true, 00:15:11.174 "data_offset": 0, 00:15:11.174 "data_size": 65536 00:15:11.174 }, 00:15:11.174 { 00:15:11.174 "name": "BaseBdev3", 
00:15:11.174 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:11.174 "is_configured": true, 00:15:11.174 "data_offset": 0, 00:15:11.174 "data_size": 65536 00:15:11.174 } 00:15:11.174 ] 00:15:11.174 }' 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.174 19:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.744 [2024-12-12 19:43:54.502360] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:11.744 [2024-12-12 19:43:54.502437] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:11.744 [2024-12-12 19:43:54.502479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.318 19:43:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.319 "name": "raid_bdev1", 00:15:12.319 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:12.319 "strip_size_kb": 64, 00:15:12.319 "state": "online", 00:15:12.319 "raid_level": "raid5f", 00:15:12.319 "superblock": false, 00:15:12.319 "num_base_bdevs": 3, 00:15:12.319 "num_base_bdevs_discovered": 3, 00:15:12.319 "num_base_bdevs_operational": 3, 00:15:12.319 "base_bdevs_list": [ 00:15:12.319 { 00:15:12.319 "name": "spare", 00:15:12.319 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:12.319 "is_configured": true, 00:15:12.319 "data_offset": 0, 00:15:12.319 "data_size": 65536 00:15:12.319 }, 00:15:12.319 { 00:15:12.319 "name": "BaseBdev2", 00:15:12.319 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:12.319 "is_configured": true, 00:15:12.319 "data_offset": 0, 00:15:12.319 "data_size": 65536 00:15:12.319 }, 00:15:12.319 { 00:15:12.319 "name": "BaseBdev3", 00:15:12.319 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:12.319 "is_configured": true, 00:15:12.319 "data_offset": 0, 00:15:12.319 "data_size": 65536 00:15:12.319 } 00:15:12.319 ] 00:15:12.319 }' 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:12.319 
19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.319 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.592 "name": "raid_bdev1", 00:15:12.592 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:12.592 "strip_size_kb": 64, 00:15:12.592 "state": "online", 00:15:12.592 "raid_level": "raid5f", 00:15:12.592 "superblock": false, 00:15:12.592 "num_base_bdevs": 3, 00:15:12.592 "num_base_bdevs_discovered": 3, 00:15:12.592 "num_base_bdevs_operational": 3, 00:15:12.592 "base_bdevs_list": [ 00:15:12.592 { 00:15:12.592 "name": "spare", 00:15:12.592 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:12.592 "is_configured": true, 00:15:12.592 "data_offset": 0, 00:15:12.592 "data_size": 65536 00:15:12.592 }, 00:15:12.592 { 00:15:12.592 "name": "BaseBdev2", 00:15:12.592 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 
00:15:12.592 "is_configured": true, 00:15:12.592 "data_offset": 0, 00:15:12.592 "data_size": 65536 00:15:12.592 }, 00:15:12.592 { 00:15:12.592 "name": "BaseBdev3", 00:15:12.592 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:12.592 "is_configured": true, 00:15:12.592 "data_offset": 0, 00:15:12.592 "data_size": 65536 00:15:12.592 } 00:15:12.592 ] 00:15:12.592 }' 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.592 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.592 "name": "raid_bdev1", 00:15:12.592 "uuid": "70792bd6-71b5-4426-9b1f-e85de16e28e1", 00:15:12.592 "strip_size_kb": 64, 00:15:12.592 "state": "online", 00:15:12.592 "raid_level": "raid5f", 00:15:12.592 "superblock": false, 00:15:12.592 "num_base_bdevs": 3, 00:15:12.592 "num_base_bdevs_discovered": 3, 00:15:12.592 "num_base_bdevs_operational": 3, 00:15:12.592 "base_bdevs_list": [ 00:15:12.592 { 00:15:12.592 "name": "spare", 00:15:12.592 "uuid": "a2ab8582-1563-5b2b-920e-78b011713ebe", 00:15:12.592 "is_configured": true, 00:15:12.592 "data_offset": 0, 00:15:12.592 "data_size": 65536 00:15:12.592 }, 00:15:12.592 { 00:15:12.592 "name": "BaseBdev2", 00:15:12.592 "uuid": "8fda0b50-bca9-53c9-9d1f-6d5d871d9258", 00:15:12.593 "is_configured": true, 00:15:12.593 "data_offset": 0, 00:15:12.593 "data_size": 65536 00:15:12.593 }, 00:15:12.593 { 00:15:12.593 "name": "BaseBdev3", 00:15:12.593 "uuid": "e78aaa7f-cf14-5de2-be37-ace012562212", 00:15:12.593 "is_configured": true, 00:15:12.593 "data_offset": 0, 00:15:12.593 "data_size": 65536 00:15:12.593 } 00:15:12.593 ] 00:15:12.593 }' 00:15:12.593 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.593 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.178 19:43:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.178 [2024-12-12 19:43:55.750618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.178 [2024-12-12 19:43:55.750647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.178 [2024-12-12 19:43:55.750730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.178 [2024-12-12 19:43:55.750812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.178 [2024-12-12 19:43:55.750827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:13.178 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 
-- # local rpc_server=/var/tmp/spdk.sock 00:15:13.179 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:13.179 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.179 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.179 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.179 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:13.179 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.179 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.179 19:43:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:13.179 /dev/nbd0 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:15:13.438 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.438 1+0 records in 00:15:13.438 1+0 records out 00:15:13.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433826 s, 9.4 MB/s 00:15:13.439 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.439 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:13.439 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.439 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.439 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:13.439 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.439 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.439 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:13.439 /dev/nbd1 00:15:13.439 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.699 1+0 records in 00:15:13.699 1+0 records out 00:15:13.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440535 s, 9.3 MB/s 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.699 19:43:56 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.699 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:13.958 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:13.958 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:13.958 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:13.958 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.958 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.958 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:13.958 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:13.959 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.959 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.959 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( 
i <= 20 )) 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 83259 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 83259 ']' 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 83259 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83259 00:15:14.219 killing process with pid 83259 00:15:14.219 Received shutdown signal, test time was about 60.000000 seconds 00:15:14.219 00:15:14.219 Latency(us) 00:15:14.219 [2024-12-12T19:43:57.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.219 [2024-12-12T19:43:57.064Z] =================================================================================================================== 00:15:14.219 [2024-12-12T19:43:57.064Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83259' 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@973 -- # kill 83259 00:15:14.219 [2024-12-12 19:43:56.903400] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.219 19:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 83259 00:15:14.479 [2024-12-12 19:43:57.275519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:15.860 00:15:15.860 real 0m15.214s 00:15:15.860 user 0m18.543s 00:15:15.860 sys 0m2.171s 00:15:15.860 ************************************ 00:15:15.860 END TEST raid5f_rebuild_test 00:15:15.860 ************************************ 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.860 19:43:58 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:15.860 19:43:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:15.860 19:43:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.860 19:43:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.860 ************************************ 00:15:15.860 START TEST raid5f_rebuild_test_sb 00:15:15.860 ************************************ 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 
00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 
00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=83695 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 83695 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83695 ']' 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.860 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:15.861 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.861 19:43:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.861 [2024-12-12 19:43:58.511946] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:15.861 [2024-12-12 19:43:58.512164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:15.861 Zero copy mechanism will not be used. 00:15:15.861 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83695 ] 00:15:15.861 [2024-12-12 19:43:58.688849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.120 [2024-12-12 19:43:58.798180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.380 [2024-12-12 19:43:58.986775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.380 [2024-12-12 19:43:58.986908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.641 BaseBdev1_malloc 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.641 [2024-12-12 19:43:59.358407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:16.641 [2024-12-12 19:43:59.358509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.641 [2024-12-12 19:43:59.358560] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:16.641 [2024-12-12 19:43:59.358593] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.641 [2024-12-12 19:43:59.360612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.641 [2024-12-12 19:43:59.360682] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:16.641 BaseBdev1 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.641 BaseBdev2_malloc 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:16.641 
19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.641 [2024-12-12 19:43:59.410720] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:16.641 [2024-12-12 19:43:59.410816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.641 [2024-12-12 19:43:59.410850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:16.641 [2024-12-12 19:43:59.410880] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.641 [2024-12-12 19:43:59.412821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.641 [2024-12-12 19:43:59.412889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:16.641 BaseBdev2 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.641 BaseBdev3_malloc 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.641 [2024-12-12 19:43:59.477402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:16.641 [2024-12-12 19:43:59.477453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.641 [2024-12-12 19:43:59.477473] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:16.641 [2024-12-12 19:43:59.477483] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.641 [2024-12-12 19:43:59.479517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.641 [2024-12-12 19:43:59.479577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:16.641 BaseBdev3 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.641 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.901 spare_malloc 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.901 spare_delay 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.901 [2024-12-12 19:43:59.540366] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.901 [2024-12-12 19:43:59.540417] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.901 [2024-12-12 19:43:59.540435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:16.901 [2024-12-12 19:43:59.540445] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.901 [2024-12-12 19:43:59.542389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.901 [2024-12-12 19:43:59.542433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.901 spare 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.901 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.901 [2024-12-12 19:43:59.552410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.901 [2024-12-12 19:43:59.554100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.901 [2024-12-12 19:43:59.554160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.901 [2024-12-12 19:43:59.554345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:15:16.902 [2024-12-12 19:43:59.554358] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:16.902 [2024-12-12 19:43:59.554592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:16.902 [2024-12-12 19:43:59.560195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:16.902 [2024-12-12 19:43:59.560229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:16.902 [2024-12-12 19:43:59.560408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.902 "name": "raid_bdev1", 00:15:16.902 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:16.902 "strip_size_kb": 64, 00:15:16.902 "state": "online", 00:15:16.902 "raid_level": "raid5f", 00:15:16.902 "superblock": true, 00:15:16.902 "num_base_bdevs": 3, 00:15:16.902 "num_base_bdevs_discovered": 3, 00:15:16.902 "num_base_bdevs_operational": 3, 00:15:16.902 "base_bdevs_list": [ 00:15:16.902 { 00:15:16.902 "name": "BaseBdev1", 00:15:16.902 "uuid": "e5be10dd-19f1-5881-9694-66f459b1d620", 00:15:16.902 "is_configured": true, 00:15:16.902 "data_offset": 2048, 00:15:16.902 "data_size": 63488 00:15:16.902 }, 00:15:16.902 { 00:15:16.902 "name": "BaseBdev2", 00:15:16.902 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:16.902 "is_configured": true, 00:15:16.902 "data_offset": 2048, 00:15:16.902 "data_size": 63488 00:15:16.902 }, 00:15:16.902 { 00:15:16.902 "name": "BaseBdev3", 00:15:16.902 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:16.902 "is_configured": true, 00:15:16.902 "data_offset": 2048, 00:15:16.902 "data_size": 63488 00:15:16.902 } 00:15:16.902 ] 00:15:16.902 }' 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.902 19:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:17.471 
19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.471 [2024-12-12 19:44:00.018102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 
00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:17.471 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:17.472 [2024-12-12 19:44:00.277561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:17.472 /dev/nbd0 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.732 1+0 records in 00:15:17.732 1+0 records out 00:15:17.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419847 s, 9.8 MB/s 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:17.732 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:17.991 496+0 records in 00:15:17.991 496+0 records out 00:15:17.991 65011712 bytes (65 MB, 62 MiB) copied, 0.349957 s, 186 MB/s 00:15:17.991 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:17.991 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.991 19:44:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:17.991 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.991 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:17.991 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.991 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:18.251 [2024-12-12 19:44:00.922688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.251 [2024-12-12 19:44:00.937615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.251 "name": "raid_bdev1", 00:15:18.251 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:18.251 "strip_size_kb": 64, 00:15:18.251 "state": "online", 
00:15:18.251 "raid_level": "raid5f", 00:15:18.251 "superblock": true, 00:15:18.251 "num_base_bdevs": 3, 00:15:18.251 "num_base_bdevs_discovered": 2, 00:15:18.251 "num_base_bdevs_operational": 2, 00:15:18.251 "base_bdevs_list": [ 00:15:18.251 { 00:15:18.251 "name": null, 00:15:18.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.251 "is_configured": false, 00:15:18.251 "data_offset": 0, 00:15:18.251 "data_size": 63488 00:15:18.251 }, 00:15:18.251 { 00:15:18.251 "name": "BaseBdev2", 00:15:18.251 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:18.251 "is_configured": true, 00:15:18.251 "data_offset": 2048, 00:15:18.251 "data_size": 63488 00:15:18.251 }, 00:15:18.251 { 00:15:18.251 "name": "BaseBdev3", 00:15:18.251 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:18.251 "is_configured": true, 00:15:18.251 "data_offset": 2048, 00:15:18.251 "data_size": 63488 00:15:18.251 } 00:15:18.251 ] 00:15:18.251 }' 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.251 19:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.820 19:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:18.820 19:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.820 19:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.820 [2024-12-12 19:44:01.368859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.820 [2024-12-12 19:44:01.384497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:18.820 19:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.820 19:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:18.820 [2024-12-12 19:44:01.391176] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.757 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.757 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.758 "name": "raid_bdev1", 00:15:19.758 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:19.758 "strip_size_kb": 64, 00:15:19.758 "state": "online", 00:15:19.758 "raid_level": "raid5f", 00:15:19.758 "superblock": true, 00:15:19.758 "num_base_bdevs": 3, 00:15:19.758 "num_base_bdevs_discovered": 3, 00:15:19.758 "num_base_bdevs_operational": 3, 00:15:19.758 "process": { 00:15:19.758 "type": "rebuild", 00:15:19.758 "target": "spare", 00:15:19.758 "progress": { 00:15:19.758 "blocks": 20480, 00:15:19.758 "percent": 16 00:15:19.758 } 00:15:19.758 }, 00:15:19.758 "base_bdevs_list": [ 00:15:19.758 { 00:15:19.758 "name": "spare", 00:15:19.758 "uuid": 
"3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:19.758 "is_configured": true, 00:15:19.758 "data_offset": 2048, 00:15:19.758 "data_size": 63488 00:15:19.758 }, 00:15:19.758 { 00:15:19.758 "name": "BaseBdev2", 00:15:19.758 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:19.758 "is_configured": true, 00:15:19.758 "data_offset": 2048, 00:15:19.758 "data_size": 63488 00:15:19.758 }, 00:15:19.758 { 00:15:19.758 "name": "BaseBdev3", 00:15:19.758 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:19.758 "is_configured": true, 00:15:19.758 "data_offset": 2048, 00:15:19.758 "data_size": 63488 00:15:19.758 } 00:15:19.758 ] 00:15:19.758 }' 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.758 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.758 [2024-12-12 19:44:02.542601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.758 [2024-12-12 19:44:02.598704] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.758 [2024-12-12 19:44:02.598760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.758 [2024-12-12 19:44:02.598778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.758 [2024-12-12 19:44:02.598786] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:20.018 "name": "raid_bdev1", 00:15:20.018 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:20.018 "strip_size_kb": 64, 00:15:20.018 "state": "online", 00:15:20.018 "raid_level": "raid5f", 00:15:20.018 "superblock": true, 00:15:20.018 "num_base_bdevs": 3, 00:15:20.018 "num_base_bdevs_discovered": 2, 00:15:20.018 "num_base_bdevs_operational": 2, 00:15:20.018 "base_bdevs_list": [ 00:15:20.018 { 00:15:20.018 "name": null, 00:15:20.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.018 "is_configured": false, 00:15:20.018 "data_offset": 0, 00:15:20.018 "data_size": 63488 00:15:20.018 }, 00:15:20.018 { 00:15:20.018 "name": "BaseBdev2", 00:15:20.018 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:20.018 "is_configured": true, 00:15:20.018 "data_offset": 2048, 00:15:20.018 "data_size": 63488 00:15:20.018 }, 00:15:20.018 { 00:15:20.018 "name": "BaseBdev3", 00:15:20.018 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:20.018 "is_configured": true, 00:15:20.018 "data_offset": 2048, 00:15:20.018 "data_size": 63488 00:15:20.018 } 00:15:20.018 ] 00:15:20.018 }' 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.018 19:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.277 "name": "raid_bdev1", 00:15:20.277 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:20.277 "strip_size_kb": 64, 00:15:20.277 "state": "online", 00:15:20.277 "raid_level": "raid5f", 00:15:20.277 "superblock": true, 00:15:20.277 "num_base_bdevs": 3, 00:15:20.277 "num_base_bdevs_discovered": 2, 00:15:20.277 "num_base_bdevs_operational": 2, 00:15:20.277 "base_bdevs_list": [ 00:15:20.277 { 00:15:20.277 "name": null, 00:15:20.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.277 "is_configured": false, 00:15:20.277 "data_offset": 0, 00:15:20.277 "data_size": 63488 00:15:20.277 }, 00:15:20.277 { 00:15:20.277 "name": "BaseBdev2", 00:15:20.277 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:20.277 "is_configured": true, 00:15:20.277 "data_offset": 2048, 00:15:20.277 "data_size": 63488 00:15:20.277 }, 00:15:20.277 { 00:15:20.277 "name": "BaseBdev3", 00:15:20.277 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:20.277 "is_configured": true, 00:15:20.277 "data_offset": 2048, 00:15:20.277 "data_size": 63488 00:15:20.277 } 00:15:20.277 ] 00:15:20.277 }' 00:15:20.277 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.537 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.537 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:20.537 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.537 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.537 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.537 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.537 [2024-12-12 19:44:03.179885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.537 [2024-12-12 19:44:03.195339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:20.537 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.537 19:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:20.537 [2024-12-12 19:44:03.202727] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.476 19:44:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.476 "name": "raid_bdev1", 00:15:21.476 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:21.476 "strip_size_kb": 64, 00:15:21.476 "state": "online", 00:15:21.476 "raid_level": "raid5f", 00:15:21.476 "superblock": true, 00:15:21.476 "num_base_bdevs": 3, 00:15:21.476 "num_base_bdevs_discovered": 3, 00:15:21.476 "num_base_bdevs_operational": 3, 00:15:21.476 "process": { 00:15:21.476 "type": "rebuild", 00:15:21.476 "target": "spare", 00:15:21.476 "progress": { 00:15:21.476 "blocks": 20480, 00:15:21.476 "percent": 16 00:15:21.476 } 00:15:21.476 }, 00:15:21.476 "base_bdevs_list": [ 00:15:21.476 { 00:15:21.476 "name": "spare", 00:15:21.476 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:21.476 "is_configured": true, 00:15:21.476 "data_offset": 2048, 00:15:21.476 "data_size": 63488 00:15:21.476 }, 00:15:21.476 { 00:15:21.476 "name": "BaseBdev2", 00:15:21.476 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:21.476 "is_configured": true, 00:15:21.476 "data_offset": 2048, 00:15:21.476 "data_size": 63488 00:15:21.476 }, 00:15:21.476 { 00:15:21.476 "name": "BaseBdev3", 00:15:21.476 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:21.476 "is_configured": true, 00:15:21.476 "data_offset": 2048, 00:15:21.476 "data_size": 63488 00:15:21.476 } 00:15:21.476 ] 00:15:21.476 }' 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.476 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:21.737 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=560 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.737 "name": "raid_bdev1", 00:15:21.737 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:21.737 "strip_size_kb": 64, 00:15:21.737 "state": "online", 00:15:21.737 "raid_level": "raid5f", 00:15:21.737 "superblock": true, 00:15:21.737 "num_base_bdevs": 3, 00:15:21.737 "num_base_bdevs_discovered": 3, 00:15:21.737 "num_base_bdevs_operational": 3, 00:15:21.737 "process": { 00:15:21.737 "type": "rebuild", 00:15:21.737 "target": "spare", 00:15:21.737 "progress": { 00:15:21.737 "blocks": 22528, 00:15:21.737 "percent": 17 00:15:21.737 } 00:15:21.737 }, 00:15:21.737 "base_bdevs_list": [ 00:15:21.737 { 00:15:21.737 "name": "spare", 00:15:21.737 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:21.737 "is_configured": true, 00:15:21.737 "data_offset": 2048, 00:15:21.737 "data_size": 63488 00:15:21.737 }, 00:15:21.737 { 00:15:21.737 "name": "BaseBdev2", 00:15:21.737 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:21.737 "is_configured": true, 00:15:21.737 "data_offset": 2048, 00:15:21.737 "data_size": 63488 00:15:21.737 }, 00:15:21.737 { 00:15:21.737 "name": "BaseBdev3", 00:15:21.737 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:21.737 "is_configured": true, 00:15:21.737 "data_offset": 2048, 00:15:21.737 "data_size": 63488 00:15:21.737 } 00:15:21.737 ] 00:15:21.737 }' 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.737 19:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.675 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:15:22.675 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.675 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.675 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.675 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.675 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.675 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.675 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.676 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.676 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.935 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.935 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.935 "name": "raid_bdev1", 00:15:22.935 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:22.935 "strip_size_kb": 64, 00:15:22.935 "state": "online", 00:15:22.935 "raid_level": "raid5f", 00:15:22.935 "superblock": true, 00:15:22.935 "num_base_bdevs": 3, 00:15:22.935 "num_base_bdevs_discovered": 3, 00:15:22.935 "num_base_bdevs_operational": 3, 00:15:22.935 "process": { 00:15:22.935 "type": "rebuild", 00:15:22.935 "target": "spare", 00:15:22.935 "progress": { 00:15:22.935 "blocks": 45056, 00:15:22.935 "percent": 35 00:15:22.935 } 00:15:22.935 }, 00:15:22.935 "base_bdevs_list": [ 00:15:22.935 { 00:15:22.935 "name": "spare", 00:15:22.935 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:22.935 "is_configured": true, 
00:15:22.935 "data_offset": 2048, 00:15:22.935 "data_size": 63488 00:15:22.935 }, 00:15:22.935 { 00:15:22.935 "name": "BaseBdev2", 00:15:22.935 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:22.935 "is_configured": true, 00:15:22.935 "data_offset": 2048, 00:15:22.935 "data_size": 63488 00:15:22.935 }, 00:15:22.935 { 00:15:22.935 "name": "BaseBdev3", 00:15:22.935 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:22.935 "is_configured": true, 00:15:22.935 "data_offset": 2048, 00:15:22.935 "data_size": 63488 00:15:22.935 } 00:15:22.935 ] 00:15:22.935 }' 00:15:22.935 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.935 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.935 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.935 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.935 19:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.874 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.874 "name": "raid_bdev1", 00:15:23.874 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:23.874 "strip_size_kb": 64, 00:15:23.874 "state": "online", 00:15:23.874 "raid_level": "raid5f", 00:15:23.875 "superblock": true, 00:15:23.875 "num_base_bdevs": 3, 00:15:23.875 "num_base_bdevs_discovered": 3, 00:15:23.875 "num_base_bdevs_operational": 3, 00:15:23.875 "process": { 00:15:23.875 "type": "rebuild", 00:15:23.875 "target": "spare", 00:15:23.875 "progress": { 00:15:23.875 "blocks": 69632, 00:15:23.875 "percent": 54 00:15:23.875 } 00:15:23.875 }, 00:15:23.875 "base_bdevs_list": [ 00:15:23.875 { 00:15:23.875 "name": "spare", 00:15:23.875 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:23.875 "is_configured": true, 00:15:23.875 "data_offset": 2048, 00:15:23.875 "data_size": 63488 00:15:23.875 }, 00:15:23.875 { 00:15:23.875 "name": "BaseBdev2", 00:15:23.875 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:23.875 "is_configured": true, 00:15:23.875 "data_offset": 2048, 00:15:23.875 "data_size": 63488 00:15:23.875 }, 00:15:23.875 { 00:15:23.875 "name": "BaseBdev3", 00:15:23.875 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:23.875 "is_configured": true, 00:15:23.875 "data_offset": 2048, 00:15:23.875 "data_size": 63488 00:15:23.875 } 00:15:23.875 ] 00:15:23.875 }' 00:15:23.875 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.875 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:23.875 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.134 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.134 19:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.074 "name": "raid_bdev1", 00:15:25.074 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:25.074 "strip_size_kb": 64, 00:15:25.074 "state": "online", 00:15:25.074 "raid_level": "raid5f", 00:15:25.074 "superblock": true, 00:15:25.074 "num_base_bdevs": 3, 00:15:25.074 
"num_base_bdevs_discovered": 3, 00:15:25.074 "num_base_bdevs_operational": 3, 00:15:25.074 "process": { 00:15:25.074 "type": "rebuild", 00:15:25.074 "target": "spare", 00:15:25.074 "progress": { 00:15:25.074 "blocks": 92160, 00:15:25.074 "percent": 72 00:15:25.074 } 00:15:25.074 }, 00:15:25.074 "base_bdevs_list": [ 00:15:25.074 { 00:15:25.074 "name": "spare", 00:15:25.074 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:25.074 "is_configured": true, 00:15:25.074 "data_offset": 2048, 00:15:25.074 "data_size": 63488 00:15:25.074 }, 00:15:25.074 { 00:15:25.074 "name": "BaseBdev2", 00:15:25.074 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:25.074 "is_configured": true, 00:15:25.074 "data_offset": 2048, 00:15:25.074 "data_size": 63488 00:15:25.074 }, 00:15:25.074 { 00:15:25.074 "name": "BaseBdev3", 00:15:25.074 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:25.074 "is_configured": true, 00:15:25.074 "data_offset": 2048, 00:15:25.074 "data_size": 63488 00:15:25.074 } 00:15:25.074 ] 00:15:25.074 }' 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.074 19:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.456 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.456 "name": "raid_bdev1", 00:15:26.456 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:26.456 "strip_size_kb": 64, 00:15:26.456 "state": "online", 00:15:26.456 "raid_level": "raid5f", 00:15:26.456 "superblock": true, 00:15:26.456 "num_base_bdevs": 3, 00:15:26.456 "num_base_bdevs_discovered": 3, 00:15:26.456 "num_base_bdevs_operational": 3, 00:15:26.456 "process": { 00:15:26.456 "type": "rebuild", 00:15:26.456 "target": "spare", 00:15:26.456 "progress": { 00:15:26.456 "blocks": 114688, 00:15:26.456 "percent": 90 00:15:26.456 } 00:15:26.456 }, 00:15:26.456 "base_bdevs_list": [ 00:15:26.456 { 00:15:26.456 "name": "spare", 00:15:26.456 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:26.456 "is_configured": true, 00:15:26.456 "data_offset": 2048, 00:15:26.456 "data_size": 63488 00:15:26.456 }, 00:15:26.456 { 00:15:26.456 "name": "BaseBdev2", 00:15:26.456 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:26.456 "is_configured": true, 00:15:26.456 "data_offset": 2048, 00:15:26.456 "data_size": 63488 00:15:26.456 }, 00:15:26.456 { 
00:15:26.456 "name": "BaseBdev3", 00:15:26.456 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:26.456 "is_configured": true, 00:15:26.456 "data_offset": 2048, 00:15:26.456 "data_size": 63488 00:15:26.457 } 00:15:26.457 ] 00:15:26.457 }' 00:15:26.457 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.457 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.457 19:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.457 19:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.457 19:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.716 [2024-12-12 19:44:09.439041] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:26.716 [2024-12-12 19:44:09.439148] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:26.716 [2024-12-12 19:44:09.439268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.286 
19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.286 "name": "raid_bdev1", 00:15:27.286 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:27.286 "strip_size_kb": 64, 00:15:27.286 "state": "online", 00:15:27.286 "raid_level": "raid5f", 00:15:27.286 "superblock": true, 00:15:27.286 "num_base_bdevs": 3, 00:15:27.286 "num_base_bdevs_discovered": 3, 00:15:27.286 "num_base_bdevs_operational": 3, 00:15:27.286 "base_bdevs_list": [ 00:15:27.286 { 00:15:27.286 "name": "spare", 00:15:27.286 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:27.286 "is_configured": true, 00:15:27.286 "data_offset": 2048, 00:15:27.286 "data_size": 63488 00:15:27.286 }, 00:15:27.286 { 00:15:27.286 "name": "BaseBdev2", 00:15:27.286 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:27.286 "is_configured": true, 00:15:27.286 "data_offset": 2048, 00:15:27.286 "data_size": 63488 00:15:27.286 }, 00:15:27.286 { 00:15:27.286 "name": "BaseBdev3", 00:15:27.286 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:27.286 "is_configured": true, 00:15:27.286 "data_offset": 2048, 00:15:27.286 "data_size": 63488 00:15:27.286 } 00:15:27.286 ] 00:15:27.286 }' 00:15:27.286 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.546 
19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.546 "name": "raid_bdev1", 00:15:27.546 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:27.546 "strip_size_kb": 64, 00:15:27.546 "state": "online", 00:15:27.546 "raid_level": "raid5f", 00:15:27.546 "superblock": true, 00:15:27.546 "num_base_bdevs": 3, 00:15:27.546 "num_base_bdevs_discovered": 3, 00:15:27.546 "num_base_bdevs_operational": 3, 00:15:27.546 "base_bdevs_list": [ 00:15:27.546 { 00:15:27.546 "name": "spare", 00:15:27.546 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:27.546 "is_configured": true, 00:15:27.546 "data_offset": 2048, 00:15:27.546 
"data_size": 63488 00:15:27.546 }, 00:15:27.546 { 00:15:27.546 "name": "BaseBdev2", 00:15:27.546 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:27.546 "is_configured": true, 00:15:27.546 "data_offset": 2048, 00:15:27.546 "data_size": 63488 00:15:27.546 }, 00:15:27.546 { 00:15:27.546 "name": "BaseBdev3", 00:15:27.546 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:27.546 "is_configured": true, 00:15:27.546 "data_offset": 2048, 00:15:27.546 "data_size": 63488 00:15:27.546 } 00:15:27.546 ] 00:15:27.546 }' 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.546 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.547 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:27.547 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.547 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.547 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.547 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.547 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.547 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.547 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.547 "name": "raid_bdev1", 00:15:27.547 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:27.547 "strip_size_kb": 64, 00:15:27.547 "state": "online", 00:15:27.547 "raid_level": "raid5f", 00:15:27.547 "superblock": true, 00:15:27.547 "num_base_bdevs": 3, 00:15:27.547 "num_base_bdevs_discovered": 3, 00:15:27.547 "num_base_bdevs_operational": 3, 00:15:27.547 "base_bdevs_list": [ 00:15:27.547 { 00:15:27.547 "name": "spare", 00:15:27.547 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:27.547 "is_configured": true, 00:15:27.547 "data_offset": 2048, 00:15:27.547 "data_size": 63488 00:15:27.547 }, 00:15:27.547 { 00:15:27.547 "name": "BaseBdev2", 00:15:27.547 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:27.547 "is_configured": true, 00:15:27.547 "data_offset": 2048, 00:15:27.547 "data_size": 63488 00:15:27.547 }, 00:15:27.547 { 00:15:27.547 "name": "BaseBdev3", 00:15:27.547 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:27.547 "is_configured": true, 00:15:27.547 "data_offset": 2048, 00:15:27.547 "data_size": 63488 00:15:27.547 } 00:15:27.547 ] 00:15:27.547 }' 00:15:27.547 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.547 
19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.157 [2024-12-12 19:44:10.778356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.157 [2024-12-12 19:44:10.778422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.157 [2024-12-12 19:44:10.778524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.157 [2024-12-12 19:44:10.778652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.157 [2024-12-12 19:44:10.778713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.157 19:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:28.416 /dev/nbd0 00:15:28.416 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w 
nbd0 /proc/partitions 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.417 1+0 records in 00:15:28.417 1+0 records out 00:15:28.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281123 s, 14.6 MB/s 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.417 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:28.676 /dev/nbd1 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:28.677 
19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.677 1+0 records in 00:15:28.677 1+0 records out 00:15:28.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413523 s, 9.9 MB/s 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:28.677 
19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.677 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.937 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:29.197 19:44:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.197 [2024-12-12 19:44:11.924570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:29.197 [2024-12-12 19:44:11.924663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.197 [2024-12-12 19:44:11.924698] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:15:29.197 [2024-12-12 19:44:11.924727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.197 [2024-12-12 19:44:11.926859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.197 [2024-12-12 19:44:11.926934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:29.197 [2024-12-12 19:44:11.927047] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:29.197 [2024-12-12 19:44:11.927133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.197 [2024-12-12 19:44:11.927312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.197 [2024-12-12 19:44:11.927466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.197 spare 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.197 19:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.197 [2024-12-12 19:44:12.027407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:29.197 [2024-12-12 19:44:12.027470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:29.197 [2024-12-12 19:44:12.027759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:29.197 [2024-12-12 19:44:12.032798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:29.197 [2024-12-12 19:44:12.032851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007b00 00:15:29.197 [2024-12-12 19:44:12.033057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.197 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.197 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.457 "name": "raid_bdev1", 00:15:29.457 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:29.457 "strip_size_kb": 64, 00:15:29.457 "state": "online", 00:15:29.457 "raid_level": "raid5f", 00:15:29.457 "superblock": true, 00:15:29.457 "num_base_bdevs": 3, 00:15:29.457 "num_base_bdevs_discovered": 3, 00:15:29.457 "num_base_bdevs_operational": 3, 00:15:29.457 "base_bdevs_list": [ 00:15:29.457 { 00:15:29.457 "name": "spare", 00:15:29.457 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:29.457 "is_configured": true, 00:15:29.457 "data_offset": 2048, 00:15:29.457 "data_size": 63488 00:15:29.457 }, 00:15:29.457 { 00:15:29.457 "name": "BaseBdev2", 00:15:29.457 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:29.457 "is_configured": true, 00:15:29.457 "data_offset": 2048, 00:15:29.457 "data_size": 63488 00:15:29.457 }, 00:15:29.457 { 00:15:29.457 "name": "BaseBdev3", 00:15:29.457 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:29.457 "is_configured": true, 00:15:29.457 "data_offset": 2048, 00:15:29.457 "data_size": 63488 00:15:29.457 } 00:15:29.457 ] 00:15:29.457 }' 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.457 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.717 "name": "raid_bdev1", 00:15:29.717 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:29.717 "strip_size_kb": 64, 00:15:29.717 "state": "online", 00:15:29.717 "raid_level": "raid5f", 00:15:29.717 "superblock": true, 00:15:29.717 "num_base_bdevs": 3, 00:15:29.717 "num_base_bdevs_discovered": 3, 00:15:29.717 "num_base_bdevs_operational": 3, 00:15:29.717 "base_bdevs_list": [ 00:15:29.717 { 00:15:29.717 "name": "spare", 00:15:29.717 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:29.717 "is_configured": true, 00:15:29.717 "data_offset": 2048, 00:15:29.717 "data_size": 63488 00:15:29.717 }, 00:15:29.717 { 00:15:29.717 "name": "BaseBdev2", 00:15:29.717 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:29.717 "is_configured": true, 00:15:29.717 "data_offset": 2048, 00:15:29.717 "data_size": 63488 00:15:29.717 }, 00:15:29.717 { 00:15:29.717 "name": "BaseBdev3", 00:15:29.717 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:29.717 "is_configured": true, 00:15:29.717 "data_offset": 2048, 00:15:29.717 "data_size": 63488 00:15:29.717 } 00:15:29.717 ] 00:15:29.717 }' 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.717 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.977 [2024-12-12 19:44:12.666422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.977 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.978 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.978 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.978 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.978 "name": "raid_bdev1", 00:15:29.978 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:29.978 "strip_size_kb": 64, 00:15:29.978 "state": "online", 00:15:29.978 "raid_level": "raid5f", 00:15:29.978 "superblock": true, 00:15:29.978 "num_base_bdevs": 3, 00:15:29.978 "num_base_bdevs_discovered": 2, 00:15:29.978 "num_base_bdevs_operational": 2, 00:15:29.978 "base_bdevs_list": [ 00:15:29.978 { 00:15:29.978 "name": null, 00:15:29.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.978 "is_configured": false, 00:15:29.978 "data_offset": 0, 00:15:29.978 "data_size": 63488 00:15:29.978 }, 00:15:29.978 { 00:15:29.978 "name": "BaseBdev2", 00:15:29.978 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:29.978 "is_configured": true, 00:15:29.978 "data_offset": 2048, 00:15:29.978 "data_size": 63488 00:15:29.978 }, 00:15:29.978 
{ 00:15:29.978 "name": "BaseBdev3", 00:15:29.978 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:29.978 "is_configured": true, 00:15:29.978 "data_offset": 2048, 00:15:29.978 "data_size": 63488 00:15:29.978 } 00:15:29.978 ] 00:15:29.978 }' 00:15:29.978 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.978 19:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.547 19:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:30.547 19:44:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.547 19:44:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.547 [2024-12-12 19:44:13.121670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:30.547 [2024-12-12 19:44:13.121888] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:30.547 [2024-12-12 19:44:13.121954] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:30.547 [2024-12-12 19:44:13.122025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:30.547 [2024-12-12 19:44:13.136921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:30.547 19:44:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.547 19:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:30.547 [2024-12-12 19:44:13.143761] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.487 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.487 "name": "raid_bdev1", 00:15:31.487 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:31.487 "strip_size_kb": 64, 00:15:31.487 "state": "online", 00:15:31.487 
"raid_level": "raid5f", 00:15:31.487 "superblock": true, 00:15:31.487 "num_base_bdevs": 3, 00:15:31.487 "num_base_bdevs_discovered": 3, 00:15:31.487 "num_base_bdevs_operational": 3, 00:15:31.487 "process": { 00:15:31.487 "type": "rebuild", 00:15:31.487 "target": "spare", 00:15:31.487 "progress": { 00:15:31.487 "blocks": 20480, 00:15:31.487 "percent": 16 00:15:31.487 } 00:15:31.487 }, 00:15:31.487 "base_bdevs_list": [ 00:15:31.487 { 00:15:31.487 "name": "spare", 00:15:31.487 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:31.487 "is_configured": true, 00:15:31.487 "data_offset": 2048, 00:15:31.487 "data_size": 63488 00:15:31.487 }, 00:15:31.487 { 00:15:31.487 "name": "BaseBdev2", 00:15:31.487 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:31.487 "is_configured": true, 00:15:31.487 "data_offset": 2048, 00:15:31.487 "data_size": 63488 00:15:31.487 }, 00:15:31.487 { 00:15:31.487 "name": "BaseBdev3", 00:15:31.488 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:31.488 "is_configured": true, 00:15:31.488 "data_offset": 2048, 00:15:31.488 "data_size": 63488 00:15:31.488 } 00:15:31.488 ] 00:15:31.488 }' 00:15:31.488 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.488 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.488 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.488 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.488 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:31.488 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.488 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.488 [2024-12-12 19:44:14.302582] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.749 [2024-12-12 19:44:14.351345] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:31.749 [2024-12-12 19:44:14.351453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.749 [2024-12-12 19:44:14.351482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.749 [2024-12-12 19:44:14.351491] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.749 "name": "raid_bdev1", 00:15:31.749 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:31.749 "strip_size_kb": 64, 00:15:31.749 "state": "online", 00:15:31.749 "raid_level": "raid5f", 00:15:31.749 "superblock": true, 00:15:31.749 "num_base_bdevs": 3, 00:15:31.749 "num_base_bdevs_discovered": 2, 00:15:31.749 "num_base_bdevs_operational": 2, 00:15:31.749 "base_bdevs_list": [ 00:15:31.749 { 00:15:31.749 "name": null, 00:15:31.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.749 "is_configured": false, 00:15:31.749 "data_offset": 0, 00:15:31.749 "data_size": 63488 00:15:31.749 }, 00:15:31.749 { 00:15:31.749 "name": "BaseBdev2", 00:15:31.749 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:31.749 "is_configured": true, 00:15:31.749 "data_offset": 2048, 00:15:31.749 "data_size": 63488 00:15:31.749 }, 00:15:31.749 { 00:15:31.749 "name": "BaseBdev3", 00:15:31.749 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:31.749 "is_configured": true, 00:15:31.749 "data_offset": 2048, 00:15:31.749 "data_size": 63488 00:15:31.749 } 00:15:31.749 ] 00:15:31.749 }' 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.749 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.009 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:32.009 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.009 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.009 [2024-12-12 19:44:14.844765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:32.009 [2024-12-12 19:44:14.844875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.009 [2024-12-12 19:44:14.844913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:32.009 [2024-12-12 19:44:14.844944] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.009 [2024-12-12 19:44:14.845480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.009 [2024-12-12 19:44:14.845557] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:32.009 [2024-12-12 19:44:14.845700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:32.009 [2024-12-12 19:44:14.845743] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:32.009 [2024-12-12 19:44:14.845795] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:32.009 [2024-12-12 19:44:14.845876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:32.268 [2024-12-12 19:44:14.861018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:32.268 spare 00:15:32.268 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.268 19:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:32.268 [2024-12-12 19:44:14.867895] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.207 "name": "raid_bdev1", 00:15:33.207 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:33.207 "strip_size_kb": 64, 00:15:33.207 "state": 
"online", 00:15:33.207 "raid_level": "raid5f", 00:15:33.207 "superblock": true, 00:15:33.207 "num_base_bdevs": 3, 00:15:33.207 "num_base_bdevs_discovered": 3, 00:15:33.207 "num_base_bdevs_operational": 3, 00:15:33.207 "process": { 00:15:33.207 "type": "rebuild", 00:15:33.207 "target": "spare", 00:15:33.207 "progress": { 00:15:33.207 "blocks": 20480, 00:15:33.207 "percent": 16 00:15:33.207 } 00:15:33.207 }, 00:15:33.207 "base_bdevs_list": [ 00:15:33.207 { 00:15:33.207 "name": "spare", 00:15:33.207 "uuid": "3d264235-4ccb-5986-acd4-2a28a499a103", 00:15:33.207 "is_configured": true, 00:15:33.207 "data_offset": 2048, 00:15:33.207 "data_size": 63488 00:15:33.207 }, 00:15:33.207 { 00:15:33.207 "name": "BaseBdev2", 00:15:33.207 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:33.207 "is_configured": true, 00:15:33.207 "data_offset": 2048, 00:15:33.207 "data_size": 63488 00:15:33.207 }, 00:15:33.207 { 00:15:33.207 "name": "BaseBdev3", 00:15:33.207 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:33.207 "is_configured": true, 00:15:33.207 "data_offset": 2048, 00:15:33.207 "data_size": 63488 00:15:33.207 } 00:15:33.207 ] 00:15:33.207 }' 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.207 19:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.207 [2024-12-12 19:44:16.006692] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:33.467 [2024-12-12 19:44:16.075489] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:33.467 [2024-12-12 19:44:16.075589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.467 [2024-12-12 19:44:16.075624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:33.467 [2024-12-12 19:44:16.075643] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.467 "name": "raid_bdev1", 00:15:33.467 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:33.467 "strip_size_kb": 64, 00:15:33.467 "state": "online", 00:15:33.467 "raid_level": "raid5f", 00:15:33.467 "superblock": true, 00:15:33.467 "num_base_bdevs": 3, 00:15:33.467 "num_base_bdevs_discovered": 2, 00:15:33.467 "num_base_bdevs_operational": 2, 00:15:33.467 "base_bdevs_list": [ 00:15:33.467 { 00:15:33.467 "name": null, 00:15:33.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.467 "is_configured": false, 00:15:33.467 "data_offset": 0, 00:15:33.467 "data_size": 63488 00:15:33.467 }, 00:15:33.467 { 00:15:33.467 "name": "BaseBdev2", 00:15:33.467 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:33.467 "is_configured": true, 00:15:33.467 "data_offset": 2048, 00:15:33.467 "data_size": 63488 00:15:33.467 }, 00:15:33.467 { 00:15:33.467 "name": "BaseBdev3", 00:15:33.467 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:33.467 "is_configured": true, 00:15:33.467 "data_offset": 2048, 00:15:33.467 "data_size": 63488 00:15:33.467 } 00:15:33.467 ] 00:15:33.467 }' 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.467 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.727 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.727 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:33.727 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.727 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.727 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.727 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.727 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.727 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.727 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.987 "name": "raid_bdev1", 00:15:33.987 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:33.987 "strip_size_kb": 64, 00:15:33.987 "state": "online", 00:15:33.987 "raid_level": "raid5f", 00:15:33.987 "superblock": true, 00:15:33.987 "num_base_bdevs": 3, 00:15:33.987 "num_base_bdevs_discovered": 2, 00:15:33.987 "num_base_bdevs_operational": 2, 00:15:33.987 "base_bdevs_list": [ 00:15:33.987 { 00:15:33.987 "name": null, 00:15:33.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.987 "is_configured": false, 00:15:33.987 "data_offset": 0, 00:15:33.987 "data_size": 63488 00:15:33.987 }, 00:15:33.987 { 00:15:33.987 "name": "BaseBdev2", 00:15:33.987 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:33.987 "is_configured": true, 00:15:33.987 "data_offset": 2048, 00:15:33.987 "data_size": 63488 00:15:33.987 }, 00:15:33.987 { 00:15:33.987 "name": "BaseBdev3", 00:15:33.987 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:33.987 "is_configured": true, 
00:15:33.987 "data_offset": 2048, 00:15:33.987 "data_size": 63488 00:15:33.987 } 00:15:33.987 ] 00:15:33.987 }' 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.987 [2024-12-12 19:44:16.719305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.987 [2024-12-12 19:44:16.719359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.987 [2024-12-12 19:44:16.719384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:33.987 [2024-12-12 19:44:16.719393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.987 [2024-12-12 19:44:16.719848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.987 [2024-12-12 
19:44:16.719876] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.987 [2024-12-12 19:44:16.719972] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:33.987 [2024-12-12 19:44:16.719988] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:33.987 [2024-12-12 19:44:16.720007] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:33.987 [2024-12-12 19:44:16.720016] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:33.987 BaseBdev1 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.987 19:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.926 19:44:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.926 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.927 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.927 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.186 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.186 "name": "raid_bdev1", 00:15:35.186 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:35.186 "strip_size_kb": 64, 00:15:35.186 "state": "online", 00:15:35.186 "raid_level": "raid5f", 00:15:35.186 "superblock": true, 00:15:35.186 "num_base_bdevs": 3, 00:15:35.186 "num_base_bdevs_discovered": 2, 00:15:35.186 "num_base_bdevs_operational": 2, 00:15:35.186 "base_bdevs_list": [ 00:15:35.186 { 00:15:35.186 "name": null, 00:15:35.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.186 "is_configured": false, 00:15:35.186 "data_offset": 0, 00:15:35.186 "data_size": 63488 00:15:35.186 }, 00:15:35.186 { 00:15:35.186 "name": "BaseBdev2", 00:15:35.186 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:35.186 "is_configured": true, 00:15:35.186 "data_offset": 2048, 00:15:35.186 "data_size": 63488 00:15:35.186 }, 00:15:35.186 { 00:15:35.186 "name": "BaseBdev3", 00:15:35.186 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:35.186 "is_configured": true, 00:15:35.186 "data_offset": 2048, 00:15:35.186 "data_size": 63488 00:15:35.186 } 00:15:35.186 ] 00:15:35.186 }' 00:15:35.186 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.186 19:44:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.446 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.446 "name": "raid_bdev1", 00:15:35.446 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:35.446 "strip_size_kb": 64, 00:15:35.446 "state": "online", 00:15:35.446 "raid_level": "raid5f", 00:15:35.446 "superblock": true, 00:15:35.446 "num_base_bdevs": 3, 00:15:35.446 "num_base_bdevs_discovered": 2, 00:15:35.446 "num_base_bdevs_operational": 2, 00:15:35.446 "base_bdevs_list": [ 00:15:35.446 { 00:15:35.446 "name": null, 00:15:35.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.446 "is_configured": false, 00:15:35.446 "data_offset": 0, 00:15:35.446 "data_size": 63488 00:15:35.446 }, 00:15:35.446 { 00:15:35.446 "name": "BaseBdev2", 00:15:35.446 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 
00:15:35.447 "is_configured": true, 00:15:35.447 "data_offset": 2048, 00:15:35.447 "data_size": 63488 00:15:35.447 }, 00:15:35.447 { 00:15:35.447 "name": "BaseBdev3", 00:15:35.447 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:35.447 "is_configured": true, 00:15:35.447 "data_offset": 2048, 00:15:35.447 "data_size": 63488 00:15:35.447 } 00:15:35.447 ] 00:15:35.447 }' 00:15:35.447 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.447 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.447 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:35.706 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.706 19:44:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.706 [2024-12-12 19:44:18.328662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.707 [2024-12-12 19:44:18.328838] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:35.707 [2024-12-12 19:44:18.328895] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:35.707 request: 00:15:35.707 { 00:15:35.707 "base_bdev": "BaseBdev1", 00:15:35.707 "raid_bdev": "raid_bdev1", 00:15:35.707 "method": "bdev_raid_add_base_bdev", 00:15:35.707 "req_id": 1 00:15:35.707 } 00:15:35.707 Got JSON-RPC error response 00:15:35.707 response: 00:15:35.707 { 00:15:35.707 "code": -22, 00:15:35.707 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:35.707 } 00:15:35.707 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:35.707 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:35.707 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:35.707 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:35.707 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:35.707 19:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.644 "name": "raid_bdev1", 00:15:36.644 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:36.644 "strip_size_kb": 64, 00:15:36.644 "state": "online", 00:15:36.644 "raid_level": "raid5f", 00:15:36.644 "superblock": true, 00:15:36.644 "num_base_bdevs": 3, 00:15:36.644 "num_base_bdevs_discovered": 2, 00:15:36.644 "num_base_bdevs_operational": 2, 00:15:36.644 "base_bdevs_list": [ 00:15:36.644 { 00:15:36.644 "name": null, 00:15:36.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.644 "is_configured": false, 00:15:36.644 "data_offset": 0, 00:15:36.644 "data_size": 63488 00:15:36.644 }, 00:15:36.644 { 00:15:36.644 
"name": "BaseBdev2", 00:15:36.644 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:36.644 "is_configured": true, 00:15:36.644 "data_offset": 2048, 00:15:36.644 "data_size": 63488 00:15:36.644 }, 00:15:36.644 { 00:15:36.644 "name": "BaseBdev3", 00:15:36.644 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:36.644 "is_configured": true, 00:15:36.644 "data_offset": 2048, 00:15:36.644 "data_size": 63488 00:15:36.644 } 00:15:36.644 ] 00:15:36.644 }' 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.644 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.212 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.213 "name": "raid_bdev1", 00:15:37.213 "uuid": "c1634249-a47b-4d6f-aee2-4fc32f86f373", 00:15:37.213 
"strip_size_kb": 64, 00:15:37.213 "state": "online", 00:15:37.213 "raid_level": "raid5f", 00:15:37.213 "superblock": true, 00:15:37.213 "num_base_bdevs": 3, 00:15:37.213 "num_base_bdevs_discovered": 2, 00:15:37.213 "num_base_bdevs_operational": 2, 00:15:37.213 "base_bdevs_list": [ 00:15:37.213 { 00:15:37.213 "name": null, 00:15:37.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.213 "is_configured": false, 00:15:37.213 "data_offset": 0, 00:15:37.213 "data_size": 63488 00:15:37.213 }, 00:15:37.213 { 00:15:37.213 "name": "BaseBdev2", 00:15:37.213 "uuid": "6852a76c-fbab-548b-90b8-ce46c65708bf", 00:15:37.213 "is_configured": true, 00:15:37.213 "data_offset": 2048, 00:15:37.213 "data_size": 63488 00:15:37.213 }, 00:15:37.213 { 00:15:37.213 "name": "BaseBdev3", 00:15:37.213 "uuid": "1877003b-397d-53aa-90ea-59be1bc99f31", 00:15:37.213 "is_configured": true, 00:15:37.213 "data_offset": 2048, 00:15:37.213 "data_size": 63488 00:15:37.213 } 00:15:37.213 ] 00:15:37.213 }' 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 83695 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83695 ']' 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 83695 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.213 19:44:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83695 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.213 killing process with pid 83695 00:15:37.213 Received shutdown signal, test time was about 60.000000 seconds 00:15:37.213 00:15:37.213 Latency(us) 00:15:37.213 [2024-12-12T19:44:20.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.213 [2024-12-12T19:44:20.058Z] =================================================================================================================== 00:15:37.213 [2024-12-12T19:44:20.058Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83695' 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 83695 00:15:37.213 [2024-12-12 19:44:19.957633] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.213 [2024-12-12 19:44:19.957741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.213 [2024-12-12 19:44:19.957796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.213 [2024-12-12 19:44:19.957807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:37.213 19:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 83695 00:15:37.783 [2024-12-12 19:44:20.324393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.722 19:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:38.722 00:15:38.722 real 0m22.950s 00:15:38.722 user 0m29.365s 
00:15:38.722 sys 0m2.700s 00:15:38.722 19:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.722 19:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.722 ************************************ 00:15:38.722 END TEST raid5f_rebuild_test_sb 00:15:38.722 ************************************ 00:15:38.722 19:44:21 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:38.722 19:44:21 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:38.722 19:44:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:38.722 19:44:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.722 19:44:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.722 ************************************ 00:15:38.722 START TEST raid5f_state_function_test 00:15:38.722 ************************************ 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84442 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:38.722 Process raid pid: 84442 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84442' 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84442 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 84442 ']' 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.722 19:44:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.722 [2024-12-12 19:44:21.532894] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:38.722 [2024-12-12 19:44:21.533097] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.982 [2024-12-12 19:44:21.706051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.982 [2024-12-12 19:44:21.811735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.242 [2024-12-12 19:44:22.008886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.242 [2024-12-12 19:44:22.008923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.811 [2024-12-12 19:44:22.367364] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.811 [2024-12-12 19:44:22.367455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.811 [2024-12-12 19:44:22.367504] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.811 [2024-12-12 19:44:22.367554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.811 [2024-12-12 19:44:22.367583] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:39.811 [2024-12-12 19:44:22.367605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.811 [2024-12-12 19:44:22.367683] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:39.811 [2024-12-12 19:44:22.367705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.811 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.812 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.812 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.812 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.812 19:44:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.812 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.812 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.812 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.812 "name": "Existed_Raid", 00:15:39.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.812 "strip_size_kb": 64, 00:15:39.812 "state": "configuring", 00:15:39.812 "raid_level": "raid5f", 00:15:39.812 "superblock": false, 00:15:39.812 "num_base_bdevs": 4, 00:15:39.812 "num_base_bdevs_discovered": 0, 00:15:39.812 "num_base_bdevs_operational": 4, 00:15:39.812 "base_bdevs_list": [ 00:15:39.812 { 00:15:39.812 "name": "BaseBdev1", 00:15:39.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.812 "is_configured": false, 00:15:39.812 "data_offset": 0, 00:15:39.812 "data_size": 0 00:15:39.812 }, 00:15:39.812 { 00:15:39.812 "name": "BaseBdev2", 00:15:39.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.812 "is_configured": false, 00:15:39.812 "data_offset": 0, 00:15:39.812 "data_size": 0 00:15:39.812 }, 00:15:39.812 { 00:15:39.812 "name": "BaseBdev3", 00:15:39.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.812 "is_configured": false, 00:15:39.812 "data_offset": 0, 00:15:39.812 "data_size": 0 00:15:39.812 }, 00:15:39.812 { 00:15:39.812 "name": "BaseBdev4", 00:15:39.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.812 "is_configured": false, 00:15:39.812 "data_offset": 0, 00:15:39.812 "data_size": 0 00:15:39.812 } 00:15:39.812 ] 00:15:39.812 }' 00:15:39.812 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.812 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.072 [2024-12-12 19:44:22.794555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.072 [2024-12-12 19:44:22.794624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.072 [2024-12-12 19:44:22.806538] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:40.072 [2024-12-12 19:44:22.806621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:40.072 [2024-12-12 19:44:22.806648] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:40.072 [2024-12-12 19:44:22.806670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:40.072 [2024-12-12 19:44:22.806687] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:40.072 [2024-12-12 19:44:22.806706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:40.072 [2024-12-12 19:44:22.806723] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:40.072 [2024-12-12 19:44:22.806743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.072 [2024-12-12 19:44:22.851371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.072 BaseBdev1 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.072 
19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.072 [ 00:15:40.072 { 00:15:40.072 "name": "BaseBdev1", 00:15:40.072 "aliases": [ 00:15:40.072 "aaf1f821-ad76-43a8-9edb-569e82e1a593" 00:15:40.072 ], 00:15:40.072 "product_name": "Malloc disk", 00:15:40.072 "block_size": 512, 00:15:40.072 "num_blocks": 65536, 00:15:40.072 "uuid": "aaf1f821-ad76-43a8-9edb-569e82e1a593", 00:15:40.072 "assigned_rate_limits": { 00:15:40.072 "rw_ios_per_sec": 0, 00:15:40.072 "rw_mbytes_per_sec": 0, 00:15:40.072 "r_mbytes_per_sec": 0, 00:15:40.072 "w_mbytes_per_sec": 0 00:15:40.072 }, 00:15:40.072 "claimed": true, 00:15:40.072 "claim_type": "exclusive_write", 00:15:40.072 "zoned": false, 00:15:40.072 "supported_io_types": { 00:15:40.072 "read": true, 00:15:40.072 "write": true, 00:15:40.072 "unmap": true, 00:15:40.072 "flush": true, 00:15:40.072 "reset": true, 00:15:40.072 "nvme_admin": false, 00:15:40.072 "nvme_io": false, 00:15:40.072 "nvme_io_md": false, 00:15:40.072 "write_zeroes": true, 00:15:40.072 "zcopy": true, 00:15:40.072 "get_zone_info": false, 00:15:40.072 "zone_management": false, 00:15:40.072 "zone_append": false, 00:15:40.072 "compare": false, 00:15:40.072 "compare_and_write": false, 00:15:40.072 "abort": true, 00:15:40.072 "seek_hole": false, 00:15:40.072 "seek_data": false, 00:15:40.072 "copy": true, 00:15:40.072 "nvme_iov_md": false 00:15:40.072 }, 00:15:40.072 "memory_domains": [ 00:15:40.072 { 00:15:40.072 "dma_device_id": "system", 00:15:40.072 "dma_device_type": 1 00:15:40.072 }, 00:15:40.072 { 00:15:40.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.072 "dma_device_type": 2 00:15:40.072 } 00:15:40.072 ], 00:15:40.072 "driver_specific": {} 00:15:40.072 } 
00:15:40.072 ] 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.072 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.073 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.073 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:40.332 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.332 "name": "Existed_Raid", 00:15:40.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.332 "strip_size_kb": 64, 00:15:40.332 "state": "configuring", 00:15:40.332 "raid_level": "raid5f", 00:15:40.332 "superblock": false, 00:15:40.332 "num_base_bdevs": 4, 00:15:40.332 "num_base_bdevs_discovered": 1, 00:15:40.332 "num_base_bdevs_operational": 4, 00:15:40.332 "base_bdevs_list": [ 00:15:40.332 { 00:15:40.332 "name": "BaseBdev1", 00:15:40.332 "uuid": "aaf1f821-ad76-43a8-9edb-569e82e1a593", 00:15:40.332 "is_configured": true, 00:15:40.332 "data_offset": 0, 00:15:40.332 "data_size": 65536 00:15:40.332 }, 00:15:40.332 { 00:15:40.332 "name": "BaseBdev2", 00:15:40.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.332 "is_configured": false, 00:15:40.332 "data_offset": 0, 00:15:40.332 "data_size": 0 00:15:40.332 }, 00:15:40.332 { 00:15:40.332 "name": "BaseBdev3", 00:15:40.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.332 "is_configured": false, 00:15:40.332 "data_offset": 0, 00:15:40.332 "data_size": 0 00:15:40.332 }, 00:15:40.332 { 00:15:40.332 "name": "BaseBdev4", 00:15:40.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.332 "is_configured": false, 00:15:40.332 "data_offset": 0, 00:15:40.332 "data_size": 0 00:15:40.332 } 00:15:40.332 ] 00:15:40.332 }' 00:15:40.332 19:44:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.332 19:44:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.592 
[2024-12-12 19:44:23.310604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.592 [2024-12-12 19:44:23.310681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.592 [2024-12-12 19:44:23.322641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.592 [2024-12-12 19:44:23.324355] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:40.592 [2024-12-12 19:44:23.324429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:40.592 [2024-12-12 19:44:23.324456] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:40.592 [2024-12-12 19:44:23.324479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:40.592 [2024-12-12 19:44:23.324497] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:40.592 [2024-12-12 19:44:23.324517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.592 "name": "Existed_Raid", 00:15:40.592 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:40.592 "strip_size_kb": 64, 00:15:40.592 "state": "configuring", 00:15:40.592 "raid_level": "raid5f", 00:15:40.592 "superblock": false, 00:15:40.592 "num_base_bdevs": 4, 00:15:40.592 "num_base_bdevs_discovered": 1, 00:15:40.592 "num_base_bdevs_operational": 4, 00:15:40.592 "base_bdevs_list": [ 00:15:40.592 { 00:15:40.592 "name": "BaseBdev1", 00:15:40.592 "uuid": "aaf1f821-ad76-43a8-9edb-569e82e1a593", 00:15:40.592 "is_configured": true, 00:15:40.592 "data_offset": 0, 00:15:40.592 "data_size": 65536 00:15:40.592 }, 00:15:40.592 { 00:15:40.592 "name": "BaseBdev2", 00:15:40.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.592 "is_configured": false, 00:15:40.592 "data_offset": 0, 00:15:40.592 "data_size": 0 00:15:40.592 }, 00:15:40.592 { 00:15:40.592 "name": "BaseBdev3", 00:15:40.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.592 "is_configured": false, 00:15:40.592 "data_offset": 0, 00:15:40.592 "data_size": 0 00:15:40.592 }, 00:15:40.592 { 00:15:40.592 "name": "BaseBdev4", 00:15:40.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.592 "is_configured": false, 00:15:40.592 "data_offset": 0, 00:15:40.592 "data_size": 0 00:15:40.592 } 00:15:40.592 ] 00:15:40.592 }' 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.592 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.162 [2024-12-12 19:44:23.787896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.162 BaseBdev2 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.162 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.162 [ 00:15:41.162 { 00:15:41.162 "name": "BaseBdev2", 00:15:41.162 "aliases": [ 00:15:41.162 "008a0f07-e710-402e-aa73-f69faa5ace54" 00:15:41.162 ], 00:15:41.162 "product_name": "Malloc disk", 00:15:41.162 "block_size": 512, 00:15:41.162 "num_blocks": 65536, 00:15:41.162 "uuid": "008a0f07-e710-402e-aa73-f69faa5ace54", 00:15:41.162 "assigned_rate_limits": { 00:15:41.162 "rw_ios_per_sec": 0, 00:15:41.162 "rw_mbytes_per_sec": 0, 00:15:41.162 
"r_mbytes_per_sec": 0, 00:15:41.163 "w_mbytes_per_sec": 0 00:15:41.163 }, 00:15:41.163 "claimed": true, 00:15:41.163 "claim_type": "exclusive_write", 00:15:41.163 "zoned": false, 00:15:41.163 "supported_io_types": { 00:15:41.163 "read": true, 00:15:41.163 "write": true, 00:15:41.163 "unmap": true, 00:15:41.163 "flush": true, 00:15:41.163 "reset": true, 00:15:41.163 "nvme_admin": false, 00:15:41.163 "nvme_io": false, 00:15:41.163 "nvme_io_md": false, 00:15:41.163 "write_zeroes": true, 00:15:41.163 "zcopy": true, 00:15:41.163 "get_zone_info": false, 00:15:41.163 "zone_management": false, 00:15:41.163 "zone_append": false, 00:15:41.163 "compare": false, 00:15:41.163 "compare_and_write": false, 00:15:41.163 "abort": true, 00:15:41.163 "seek_hole": false, 00:15:41.163 "seek_data": false, 00:15:41.163 "copy": true, 00:15:41.163 "nvme_iov_md": false 00:15:41.163 }, 00:15:41.163 "memory_domains": [ 00:15:41.163 { 00:15:41.163 "dma_device_id": "system", 00:15:41.163 "dma_device_type": 1 00:15:41.163 }, 00:15:41.163 { 00:15:41.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.163 "dma_device_type": 2 00:15:41.163 } 00:15:41.163 ], 00:15:41.163 "driver_specific": {} 00:15:41.163 } 00:15:41.163 ] 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.163 "name": "Existed_Raid", 00:15:41.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.163 "strip_size_kb": 64, 00:15:41.163 "state": "configuring", 00:15:41.163 "raid_level": "raid5f", 00:15:41.163 "superblock": false, 00:15:41.163 "num_base_bdevs": 4, 00:15:41.163 "num_base_bdevs_discovered": 2, 00:15:41.163 "num_base_bdevs_operational": 4, 00:15:41.163 "base_bdevs_list": [ 00:15:41.163 { 00:15:41.163 "name": "BaseBdev1", 00:15:41.163 "uuid": 
"aaf1f821-ad76-43a8-9edb-569e82e1a593", 00:15:41.163 "is_configured": true, 00:15:41.163 "data_offset": 0, 00:15:41.163 "data_size": 65536 00:15:41.163 }, 00:15:41.163 { 00:15:41.163 "name": "BaseBdev2", 00:15:41.163 "uuid": "008a0f07-e710-402e-aa73-f69faa5ace54", 00:15:41.163 "is_configured": true, 00:15:41.163 "data_offset": 0, 00:15:41.163 "data_size": 65536 00:15:41.163 }, 00:15:41.163 { 00:15:41.163 "name": "BaseBdev3", 00:15:41.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.163 "is_configured": false, 00:15:41.163 "data_offset": 0, 00:15:41.163 "data_size": 0 00:15:41.163 }, 00:15:41.163 { 00:15:41.163 "name": "BaseBdev4", 00:15:41.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.163 "is_configured": false, 00:15:41.163 "data_offset": 0, 00:15:41.163 "data_size": 0 00:15:41.163 } 00:15:41.163 ] 00:15:41.163 }' 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.163 19:44:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.423 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:41.423 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.423 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.683 [2024-12-12 19:44:24.313147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.683 BaseBdev3 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.683 [ 00:15:41.683 { 00:15:41.683 "name": "BaseBdev3", 00:15:41.683 "aliases": [ 00:15:41.683 "55aabc33-b95b-4672-a550-07cf536dae06" 00:15:41.683 ], 00:15:41.683 "product_name": "Malloc disk", 00:15:41.683 "block_size": 512, 00:15:41.683 "num_blocks": 65536, 00:15:41.683 "uuid": "55aabc33-b95b-4672-a550-07cf536dae06", 00:15:41.683 "assigned_rate_limits": { 00:15:41.683 "rw_ios_per_sec": 0, 00:15:41.683 "rw_mbytes_per_sec": 0, 00:15:41.683 "r_mbytes_per_sec": 0, 00:15:41.683 "w_mbytes_per_sec": 0 00:15:41.683 }, 00:15:41.683 "claimed": true, 00:15:41.683 "claim_type": "exclusive_write", 00:15:41.683 "zoned": false, 00:15:41.683 "supported_io_types": { 00:15:41.683 "read": true, 00:15:41.683 "write": true, 00:15:41.683 "unmap": true, 00:15:41.683 "flush": true, 00:15:41.683 "reset": true, 00:15:41.683 "nvme_admin": false, 
00:15:41.683 "nvme_io": false, 00:15:41.683 "nvme_io_md": false, 00:15:41.683 "write_zeroes": true, 00:15:41.683 "zcopy": true, 00:15:41.683 "get_zone_info": false, 00:15:41.683 "zone_management": false, 00:15:41.683 "zone_append": false, 00:15:41.683 "compare": false, 00:15:41.683 "compare_and_write": false, 00:15:41.683 "abort": true, 00:15:41.683 "seek_hole": false, 00:15:41.683 "seek_data": false, 00:15:41.683 "copy": true, 00:15:41.683 "nvme_iov_md": false 00:15:41.683 }, 00:15:41.683 "memory_domains": [ 00:15:41.683 { 00:15:41.683 "dma_device_id": "system", 00:15:41.683 "dma_device_type": 1 00:15:41.683 }, 00:15:41.683 { 00:15:41.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.683 "dma_device_type": 2 00:15:41.683 } 00:15:41.683 ], 00:15:41.683 "driver_specific": {} 00:15:41.683 } 00:15:41.683 ] 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.683 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.683 "name": "Existed_Raid", 00:15:41.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.683 "strip_size_kb": 64, 00:15:41.683 "state": "configuring", 00:15:41.683 "raid_level": "raid5f", 00:15:41.683 "superblock": false, 00:15:41.683 "num_base_bdevs": 4, 00:15:41.683 "num_base_bdevs_discovered": 3, 00:15:41.683 "num_base_bdevs_operational": 4, 00:15:41.683 "base_bdevs_list": [ 00:15:41.683 { 00:15:41.683 "name": "BaseBdev1", 00:15:41.683 "uuid": "aaf1f821-ad76-43a8-9edb-569e82e1a593", 00:15:41.683 "is_configured": true, 00:15:41.683 "data_offset": 0, 00:15:41.683 "data_size": 65536 00:15:41.683 }, 00:15:41.683 { 00:15:41.683 "name": "BaseBdev2", 00:15:41.683 "uuid": "008a0f07-e710-402e-aa73-f69faa5ace54", 00:15:41.683 "is_configured": true, 00:15:41.683 "data_offset": 0, 00:15:41.683 "data_size": 65536 00:15:41.683 }, 00:15:41.683 { 
00:15:41.683 "name": "BaseBdev3", 00:15:41.683 "uuid": "55aabc33-b95b-4672-a550-07cf536dae06", 00:15:41.683 "is_configured": true, 00:15:41.683 "data_offset": 0, 00:15:41.683 "data_size": 65536 00:15:41.683 }, 00:15:41.683 { 00:15:41.683 "name": "BaseBdev4", 00:15:41.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.684 "is_configured": false, 00:15:41.684 "data_offset": 0, 00:15:41.684 "data_size": 0 00:15:41.684 } 00:15:41.684 ] 00:15:41.684 }' 00:15:41.684 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.684 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.943 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:41.943 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.943 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.203 [2024-12-12 19:44:24.820447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:42.203 [2024-12-12 19:44:24.820586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:42.203 [2024-12-12 19:44:24.820615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:42.203 [2024-12-12 19:44:24.820905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:42.203 [2024-12-12 19:44:24.827467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:42.203 [2024-12-12 19:44:24.827525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:42.203 [2024-12-12 19:44:24.827855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.203 BaseBdev4 00:15:42.203 19:44:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.203 [ 00:15:42.203 { 00:15:42.203 "name": "BaseBdev4", 00:15:42.203 "aliases": [ 00:15:42.203 "38634f22-6b50-40cb-bf13-635d270a7408" 00:15:42.203 ], 00:15:42.203 "product_name": "Malloc disk", 00:15:42.203 "block_size": 512, 00:15:42.203 "num_blocks": 65536, 00:15:42.203 "uuid": "38634f22-6b50-40cb-bf13-635d270a7408", 00:15:42.203 "assigned_rate_limits": { 00:15:42.203 "rw_ios_per_sec": 0, 00:15:42.203 
"rw_mbytes_per_sec": 0, 00:15:42.203 "r_mbytes_per_sec": 0, 00:15:42.203 "w_mbytes_per_sec": 0 00:15:42.203 }, 00:15:42.203 "claimed": true, 00:15:42.203 "claim_type": "exclusive_write", 00:15:42.203 "zoned": false, 00:15:42.203 "supported_io_types": { 00:15:42.203 "read": true, 00:15:42.203 "write": true, 00:15:42.203 "unmap": true, 00:15:42.203 "flush": true, 00:15:42.203 "reset": true, 00:15:42.203 "nvme_admin": false, 00:15:42.203 "nvme_io": false, 00:15:42.203 "nvme_io_md": false, 00:15:42.203 "write_zeroes": true, 00:15:42.203 "zcopy": true, 00:15:42.203 "get_zone_info": false, 00:15:42.203 "zone_management": false, 00:15:42.203 "zone_append": false, 00:15:42.203 "compare": false, 00:15:42.203 "compare_and_write": false, 00:15:42.203 "abort": true, 00:15:42.203 "seek_hole": false, 00:15:42.203 "seek_data": false, 00:15:42.203 "copy": true, 00:15:42.203 "nvme_iov_md": false 00:15:42.203 }, 00:15:42.203 "memory_domains": [ 00:15:42.203 { 00:15:42.203 "dma_device_id": "system", 00:15:42.203 "dma_device_type": 1 00:15:42.203 }, 00:15:42.203 { 00:15:42.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.203 "dma_device_type": 2 00:15:42.203 } 00:15:42.203 ], 00:15:42.203 "driver_specific": {} 00:15:42.203 } 00:15:42.203 ] 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.203 19:44:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.203 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.203 "name": "Existed_Raid", 00:15:42.203 "uuid": "b6aa8bf6-6bbc-457b-857b-c15c08c4ffc8", 00:15:42.203 "strip_size_kb": 64, 00:15:42.203 "state": "online", 00:15:42.203 "raid_level": "raid5f", 00:15:42.203 "superblock": false, 00:15:42.203 "num_base_bdevs": 4, 00:15:42.203 "num_base_bdevs_discovered": 4, 00:15:42.203 "num_base_bdevs_operational": 4, 00:15:42.203 "base_bdevs_list": [ 00:15:42.203 { 00:15:42.203 "name": 
"BaseBdev1", 00:15:42.203 "uuid": "aaf1f821-ad76-43a8-9edb-569e82e1a593", 00:15:42.203 "is_configured": true, 00:15:42.203 "data_offset": 0, 00:15:42.203 "data_size": 65536 00:15:42.203 }, 00:15:42.203 { 00:15:42.203 "name": "BaseBdev2", 00:15:42.203 "uuid": "008a0f07-e710-402e-aa73-f69faa5ace54", 00:15:42.203 "is_configured": true, 00:15:42.203 "data_offset": 0, 00:15:42.203 "data_size": 65536 00:15:42.203 }, 00:15:42.203 { 00:15:42.203 "name": "BaseBdev3", 00:15:42.203 "uuid": "55aabc33-b95b-4672-a550-07cf536dae06", 00:15:42.203 "is_configured": true, 00:15:42.203 "data_offset": 0, 00:15:42.203 "data_size": 65536 00:15:42.203 }, 00:15:42.203 { 00:15:42.203 "name": "BaseBdev4", 00:15:42.203 "uuid": "38634f22-6b50-40cb-bf13-635d270a7408", 00:15:42.203 "is_configured": true, 00:15:42.203 "data_offset": 0, 00:15:42.203 "data_size": 65536 00:15:42.203 } 00:15:42.203 ] 00:15:42.203 }' 00:15:42.204 19:44:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.204 19:44:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.771 [2024-12-12 19:44:25.362625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:42.771 "name": "Existed_Raid", 00:15:42.771 "aliases": [ 00:15:42.771 "b6aa8bf6-6bbc-457b-857b-c15c08c4ffc8" 00:15:42.771 ], 00:15:42.771 "product_name": "Raid Volume", 00:15:42.771 "block_size": 512, 00:15:42.771 "num_blocks": 196608, 00:15:42.771 "uuid": "b6aa8bf6-6bbc-457b-857b-c15c08c4ffc8", 00:15:42.771 "assigned_rate_limits": { 00:15:42.771 "rw_ios_per_sec": 0, 00:15:42.771 "rw_mbytes_per_sec": 0, 00:15:42.771 "r_mbytes_per_sec": 0, 00:15:42.771 "w_mbytes_per_sec": 0 00:15:42.771 }, 00:15:42.771 "claimed": false, 00:15:42.771 "zoned": false, 00:15:42.771 "supported_io_types": { 00:15:42.771 "read": true, 00:15:42.771 "write": true, 00:15:42.771 "unmap": false, 00:15:42.771 "flush": false, 00:15:42.771 "reset": true, 00:15:42.771 "nvme_admin": false, 00:15:42.771 "nvme_io": false, 00:15:42.771 "nvme_io_md": false, 00:15:42.771 "write_zeroes": true, 00:15:42.771 "zcopy": false, 00:15:42.771 "get_zone_info": false, 00:15:42.771 "zone_management": false, 00:15:42.771 "zone_append": false, 00:15:42.771 "compare": false, 00:15:42.771 "compare_and_write": false, 00:15:42.771 "abort": false, 00:15:42.771 "seek_hole": false, 00:15:42.771 "seek_data": false, 00:15:42.771 "copy": false, 00:15:42.771 "nvme_iov_md": false 00:15:42.771 }, 00:15:42.771 "driver_specific": { 00:15:42.771 "raid": { 00:15:42.771 "uuid": "b6aa8bf6-6bbc-457b-857b-c15c08c4ffc8", 00:15:42.771 "strip_size_kb": 64, 
00:15:42.771 "state": "online", 00:15:42.771 "raid_level": "raid5f", 00:15:42.771 "superblock": false, 00:15:42.771 "num_base_bdevs": 4, 00:15:42.771 "num_base_bdevs_discovered": 4, 00:15:42.771 "num_base_bdevs_operational": 4, 00:15:42.771 "base_bdevs_list": [ 00:15:42.771 { 00:15:42.771 "name": "BaseBdev1", 00:15:42.771 "uuid": "aaf1f821-ad76-43a8-9edb-569e82e1a593", 00:15:42.771 "is_configured": true, 00:15:42.771 "data_offset": 0, 00:15:42.771 "data_size": 65536 00:15:42.771 }, 00:15:42.771 { 00:15:42.771 "name": "BaseBdev2", 00:15:42.771 "uuid": "008a0f07-e710-402e-aa73-f69faa5ace54", 00:15:42.771 "is_configured": true, 00:15:42.771 "data_offset": 0, 00:15:42.771 "data_size": 65536 00:15:42.771 }, 00:15:42.771 { 00:15:42.771 "name": "BaseBdev3", 00:15:42.771 "uuid": "55aabc33-b95b-4672-a550-07cf536dae06", 00:15:42.771 "is_configured": true, 00:15:42.771 "data_offset": 0, 00:15:42.771 "data_size": 65536 00:15:42.771 }, 00:15:42.771 { 00:15:42.771 "name": "BaseBdev4", 00:15:42.771 "uuid": "38634f22-6b50-40cb-bf13-635d270a7408", 00:15:42.771 "is_configured": true, 00:15:42.771 "data_offset": 0, 00:15:42.771 "data_size": 65536 00:15:42.771 } 00:15:42.771 ] 00:15:42.771 } 00:15:42.771 } 00:15:42.771 }' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:42.771 BaseBdev2 00:15:42.771 BaseBdev3 00:15:42.771 BaseBdev4' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.771 19:44:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.771 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- 
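The comparisons above all reduce a bdev to the fingerprint `'512   '`: jq's `join(" ")` renders the absent `md_size`/`md_interleave`/`dif_type` fields as empty strings, so the trailing spaces are significant and the `[[ 512 == \5\1\2\ \ \ ]]` test must match them exactly. A minimal pure-bash sketch of that behavior (the `join_fields` helper is hypothetical, standing in for the jq pipeline, not SPDK code):

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@189/@193 fingerprint comparison.
# Null/absent fields become empty strings, so trailing spaces survive.
join_fields() {
  local out="" f
  for f in "$@"; do out+="${f} "; done
  printf '%s' "${out% }"   # strip only the final separator
}

# block_size=512; md_size, md_interleave, dif_type all absent:
cmp_raid_bdev=$(join_fields 512 "" "" "")
cmp_base_bdev=$(join_fields 512 "" "" "")

# Quoted comparison keeps the trailing spaces meaningful.
[[ "$cmp_raid_bdev" == "$cmp_base_bdev" ]] && echo match
```

A base bdev with a different block size or with metadata enabled would produce a different fingerprint and fail the match.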
common/autotest_common.sh@10 -- # set +x 00:15:43.031 [2024-12-12 19:44:25.690397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.031 19:44:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.031 19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.031 "name": "Existed_Raid", 00:15:43.031 "uuid": "b6aa8bf6-6bbc-457b-857b-c15c08c4ffc8", 00:15:43.031 "strip_size_kb": 64, 00:15:43.031 "state": "online", 00:15:43.031 "raid_level": "raid5f", 00:15:43.031 "superblock": false, 00:15:43.031 "num_base_bdevs": 4, 00:15:43.031 "num_base_bdevs_discovered": 3, 00:15:43.031 "num_base_bdevs_operational": 3, 00:15:43.031 "base_bdevs_list": [ 00:15:43.031 { 00:15:43.031 "name": null, 00:15:43.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.031 "is_configured": false, 00:15:43.031 "data_offset": 0, 00:15:43.031 "data_size": 65536 00:15:43.031 }, 00:15:43.031 { 00:15:43.031 "name": "BaseBdev2", 00:15:43.031 "uuid": "008a0f07-e710-402e-aa73-f69faa5ace54", 00:15:43.032 "is_configured": true, 00:15:43.032 "data_offset": 0, 00:15:43.032 "data_size": 65536 00:15:43.032 }, 00:15:43.032 { 00:15:43.032 "name": "BaseBdev3", 00:15:43.032 "uuid": "55aabc33-b95b-4672-a550-07cf536dae06", 00:15:43.032 "is_configured": true, 00:15:43.032 "data_offset": 0, 00:15:43.032 "data_size": 65536 00:15:43.032 }, 00:15:43.032 { 00:15:43.032 "name": "BaseBdev4", 00:15:43.032 "uuid": "38634f22-6b50-40cb-bf13-635d270a7408", 00:15:43.032 "is_configured": true, 00:15:43.032 "data_offset": 0, 00:15:43.032 "data_size": 65536 00:15:43.032 } 00:15:43.032 ] 00:15:43.032 }' 00:15:43.032 
19:44:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.032 19:44:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.601 [2024-12-12 19:44:26.231690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.601 [2024-12-12 19:44:26.231824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.601 [2024-12-12 19:44:26.319128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.601 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.601 [2024-12-12 19:44:26.379037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.862 [2024-12-12 19:44:26.524860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:43.862 [2024-12-12 19:44:26.524946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.862 19:44:26 
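The removal loop above deletes BaseBdev1 through BaseBdev4 one at a time and checks the raid state after each step: raid5f carries one parity strip, so the array stays `online` with a single base bdev missing and only degrades further losses to `offline`. A hedged sketch of that expectation (the function name is illustrative, not part of the test suite):

```shell
#!/usr/bin/env bash
# Sketch: raid5f tolerates exactly one missing base bdev.
# Mirrors the has_redundancy/expected_state logic the log walks through.
expected_raid5f_state() {
  local total=$1 operational=$2
  if (( total - operational <= 1 )); then
    echo online    # at most one device lost: parity covers it
  else
    echo offline   # two or more lost: data is unrecoverable
  fi
}

expected_raid5f_state 4 4
expected_raid5f_state 4 3
expected_raid5f_state 4 2
```

This is why the log verifies `Existed_Raid online raid5f 64 3` after the first deletion, and only sees the `raid_bdev_cleanup ... state offline` message once further base bdevs are gone.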
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.862 BaseBdev2 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.862 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.123 [ 00:15:44.123 { 00:15:44.123 "name": "BaseBdev2", 00:15:44.123 "aliases": [ 00:15:44.123 "631be27f-51db-42de-b0e8-4cfbf78551e6" 00:15:44.123 ], 00:15:44.123 "product_name": "Malloc disk", 00:15:44.123 "block_size": 512, 00:15:44.123 "num_blocks": 65536, 00:15:44.123 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:44.123 "assigned_rate_limits": { 00:15:44.123 "rw_ios_per_sec": 0, 00:15:44.123 "rw_mbytes_per_sec": 0, 00:15:44.123 "r_mbytes_per_sec": 0, 00:15:44.123 "w_mbytes_per_sec": 0 00:15:44.123 }, 00:15:44.123 "claimed": false, 00:15:44.123 "zoned": false, 00:15:44.123 "supported_io_types": { 00:15:44.123 "read": true, 00:15:44.123 "write": true, 00:15:44.123 "unmap": true, 00:15:44.123 "flush": true, 00:15:44.123 "reset": true, 00:15:44.123 "nvme_admin": false, 00:15:44.123 "nvme_io": false, 00:15:44.123 "nvme_io_md": false, 00:15:44.123 "write_zeroes": true, 00:15:44.123 "zcopy": true, 00:15:44.123 "get_zone_info": false, 00:15:44.123 "zone_management": false, 00:15:44.123 "zone_append": false, 00:15:44.123 "compare": false, 00:15:44.123 "compare_and_write": false, 00:15:44.123 "abort": true, 00:15:44.123 "seek_hole": false, 00:15:44.123 "seek_data": false, 00:15:44.123 "copy": true, 00:15:44.123 "nvme_iov_md": false 00:15:44.123 }, 00:15:44.123 "memory_domains": [ 00:15:44.123 { 00:15:44.123 "dma_device_id": "system", 00:15:44.123 "dma_device_type": 1 00:15:44.123 }, 
00:15:44.123 { 00:15:44.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.123 "dma_device_type": 2 00:15:44.123 } 00:15:44.123 ], 00:15:44.123 "driver_specific": {} 00:15:44.123 } 00:15:44.123 ] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.123 BaseBdev3 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.123 [ 00:15:44.123 { 00:15:44.123 "name": "BaseBdev3", 00:15:44.123 "aliases": [ 00:15:44.123 "31808590-d938-4cf5-bc8c-80365f59775a" 00:15:44.123 ], 00:15:44.123 "product_name": "Malloc disk", 00:15:44.123 "block_size": 512, 00:15:44.123 "num_blocks": 65536, 00:15:44.123 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:44.123 "assigned_rate_limits": { 00:15:44.123 "rw_ios_per_sec": 0, 00:15:44.123 "rw_mbytes_per_sec": 0, 00:15:44.123 "r_mbytes_per_sec": 0, 00:15:44.123 "w_mbytes_per_sec": 0 00:15:44.123 }, 00:15:44.123 "claimed": false, 00:15:44.123 "zoned": false, 00:15:44.123 "supported_io_types": { 00:15:44.123 "read": true, 00:15:44.123 "write": true, 00:15:44.123 "unmap": true, 00:15:44.123 "flush": true, 00:15:44.123 "reset": true, 00:15:44.123 "nvme_admin": false, 00:15:44.123 "nvme_io": false, 00:15:44.123 "nvme_io_md": false, 00:15:44.123 "write_zeroes": true, 00:15:44.123 "zcopy": true, 00:15:44.123 "get_zone_info": false, 00:15:44.123 "zone_management": false, 00:15:44.123 "zone_append": false, 00:15:44.123 "compare": false, 00:15:44.123 "compare_and_write": false, 00:15:44.123 "abort": true, 00:15:44.123 "seek_hole": false, 00:15:44.123 "seek_data": false, 00:15:44.123 "copy": true, 00:15:44.123 "nvme_iov_md": false 00:15:44.123 }, 00:15:44.123 "memory_domains": [ 00:15:44.123 { 00:15:44.123 "dma_device_id": "system", 00:15:44.123 
"dma_device_type": 1 00:15:44.123 }, 00:15:44.123 { 00:15:44.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.123 "dma_device_type": 2 00:15:44.123 } 00:15:44.123 ], 00:15:44.123 "driver_specific": {} 00:15:44.123 } 00:15:44.123 ] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.123 BaseBdev4 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.123 19:44:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.123 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.123 [ 00:15:44.123 { 00:15:44.123 "name": "BaseBdev4", 00:15:44.123 "aliases": [ 00:15:44.124 "572e1a69-52d9-486d-be5a-fe7f89ff52c1" 00:15:44.124 ], 00:15:44.124 "product_name": "Malloc disk", 00:15:44.124 "block_size": 512, 00:15:44.124 "num_blocks": 65536, 00:15:44.124 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:44.124 "assigned_rate_limits": { 00:15:44.124 "rw_ios_per_sec": 0, 00:15:44.124 "rw_mbytes_per_sec": 0, 00:15:44.124 "r_mbytes_per_sec": 0, 00:15:44.124 "w_mbytes_per_sec": 0 00:15:44.124 }, 00:15:44.124 "claimed": false, 00:15:44.124 "zoned": false, 00:15:44.124 "supported_io_types": { 00:15:44.124 "read": true, 00:15:44.124 "write": true, 00:15:44.124 "unmap": true, 00:15:44.124 "flush": true, 00:15:44.124 "reset": true, 00:15:44.124 "nvme_admin": false, 00:15:44.124 "nvme_io": false, 00:15:44.124 "nvme_io_md": false, 00:15:44.124 "write_zeroes": true, 00:15:44.124 "zcopy": true, 00:15:44.124 "get_zone_info": false, 00:15:44.124 "zone_management": false, 00:15:44.124 "zone_append": false, 00:15:44.124 "compare": false, 00:15:44.124 "compare_and_write": false, 00:15:44.124 "abort": true, 00:15:44.124 "seek_hole": false, 00:15:44.124 "seek_data": false, 00:15:44.124 "copy": true, 00:15:44.124 "nvme_iov_md": false 00:15:44.124 }, 00:15:44.124 "memory_domains": [ 00:15:44.124 { 00:15:44.124 
"dma_device_id": "system", 00:15:44.124 "dma_device_type": 1 00:15:44.124 }, 00:15:44.124 { 00:15:44.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.124 "dma_device_type": 2 00:15:44.124 } 00:15:44.124 ], 00:15:44.124 "driver_specific": {} 00:15:44.124 } 00:15:44.124 ] 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.124 [2024-12-12 19:44:26.892429] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.124 [2024-12-12 19:44:26.892505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.124 [2024-12-12 19:44:26.892550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.124 [2024-12-12 19:44:26.894295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.124 [2024-12-12 19:44:26.894394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.124 "name": "Existed_Raid", 00:15:44.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.124 "strip_size_kb": 64, 00:15:44.124 "state": "configuring", 00:15:44.124 "raid_level": "raid5f", 00:15:44.124 "superblock": false, 00:15:44.124 
"num_base_bdevs": 4, 00:15:44.124 "num_base_bdevs_discovered": 3, 00:15:44.124 "num_base_bdevs_operational": 4, 00:15:44.124 "base_bdevs_list": [ 00:15:44.124 { 00:15:44.124 "name": "BaseBdev1", 00:15:44.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.124 "is_configured": false, 00:15:44.124 "data_offset": 0, 00:15:44.124 "data_size": 0 00:15:44.124 }, 00:15:44.124 { 00:15:44.124 "name": "BaseBdev2", 00:15:44.124 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:44.124 "is_configured": true, 00:15:44.124 "data_offset": 0, 00:15:44.124 "data_size": 65536 00:15:44.124 }, 00:15:44.124 { 00:15:44.124 "name": "BaseBdev3", 00:15:44.124 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:44.124 "is_configured": true, 00:15:44.124 "data_offset": 0, 00:15:44.124 "data_size": 65536 00:15:44.124 }, 00:15:44.124 { 00:15:44.124 "name": "BaseBdev4", 00:15:44.124 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:44.124 "is_configured": true, 00:15:44.124 "data_offset": 0, 00:15:44.124 "data_size": 65536 00:15:44.124 } 00:15:44.124 ] 00:15:44.124 }' 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.124 19:44:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.693 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.694 [2024-12-12 19:44:27.351661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.694 "name": "Existed_Raid", 00:15:44.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.694 "strip_size_kb": 64, 00:15:44.694 "state": "configuring", 00:15:44.694 "raid_level": "raid5f", 00:15:44.694 "superblock": false, 00:15:44.694 "num_base_bdevs": 4, 
00:15:44.694 "num_base_bdevs_discovered": 2, 00:15:44.694 "num_base_bdevs_operational": 4, 00:15:44.694 "base_bdevs_list": [ 00:15:44.694 { 00:15:44.694 "name": "BaseBdev1", 00:15:44.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.694 "is_configured": false, 00:15:44.694 "data_offset": 0, 00:15:44.694 "data_size": 0 00:15:44.694 }, 00:15:44.694 { 00:15:44.694 "name": null, 00:15:44.694 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:44.694 "is_configured": false, 00:15:44.694 "data_offset": 0, 00:15:44.694 "data_size": 65536 00:15:44.694 }, 00:15:44.694 { 00:15:44.694 "name": "BaseBdev3", 00:15:44.694 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:44.694 "is_configured": true, 00:15:44.694 "data_offset": 0, 00:15:44.694 "data_size": 65536 00:15:44.694 }, 00:15:44.694 { 00:15:44.694 "name": "BaseBdev4", 00:15:44.694 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:44.694 "is_configured": true, 00:15:44.694 "data_offset": 0, 00:15:44.694 "data_size": 65536 00:15:44.694 } 00:15:44.694 ] 00:15:44.694 }' 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.694 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:45.319 19:44:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.319 [2024-12-12 19:44:27.880364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.319 BaseBdev1 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.319 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.319 19:44:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.319 [ 00:15:45.319 { 00:15:45.319 "name": "BaseBdev1", 00:15:45.319 "aliases": [ 00:15:45.319 "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696" 00:15:45.319 ], 00:15:45.319 "product_name": "Malloc disk", 00:15:45.319 "block_size": 512, 00:15:45.319 "num_blocks": 65536, 00:15:45.319 "uuid": "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696", 00:15:45.319 "assigned_rate_limits": { 00:15:45.319 "rw_ios_per_sec": 0, 00:15:45.319 "rw_mbytes_per_sec": 0, 00:15:45.319 "r_mbytes_per_sec": 0, 00:15:45.319 "w_mbytes_per_sec": 0 00:15:45.319 }, 00:15:45.319 "claimed": true, 00:15:45.319 "claim_type": "exclusive_write", 00:15:45.319 "zoned": false, 00:15:45.319 "supported_io_types": { 00:15:45.319 "read": true, 00:15:45.319 "write": true, 00:15:45.319 "unmap": true, 00:15:45.319 "flush": true, 00:15:45.319 "reset": true, 00:15:45.319 "nvme_admin": false, 00:15:45.319 "nvme_io": false, 00:15:45.319 "nvme_io_md": false, 00:15:45.319 "write_zeroes": true, 00:15:45.319 "zcopy": true, 00:15:45.319 "get_zone_info": false, 00:15:45.319 "zone_management": false, 00:15:45.320 "zone_append": false, 00:15:45.320 "compare": false, 00:15:45.320 "compare_and_write": false, 00:15:45.320 "abort": true, 00:15:45.320 "seek_hole": false, 00:15:45.320 "seek_data": false, 00:15:45.320 "copy": true, 00:15:45.320 "nvme_iov_md": false 00:15:45.320 }, 00:15:45.320 "memory_domains": [ 00:15:45.320 { 00:15:45.320 "dma_device_id": "system", 00:15:45.320 "dma_device_type": 1 00:15:45.320 }, 00:15:45.320 { 00:15:45.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.320 "dma_device_type": 2 00:15:45.320 } 00:15:45.320 ], 00:15:45.320 "driver_specific": {} 00:15:45.320 } 00:15:45.320 ] 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:45.320 19:44:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.320 "name": "Existed_Raid", 00:15:45.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.320 "strip_size_kb": 64, 00:15:45.320 "state": 
"configuring", 00:15:45.320 "raid_level": "raid5f", 00:15:45.320 "superblock": false, 00:15:45.320 "num_base_bdevs": 4, 00:15:45.320 "num_base_bdevs_discovered": 3, 00:15:45.320 "num_base_bdevs_operational": 4, 00:15:45.320 "base_bdevs_list": [ 00:15:45.320 { 00:15:45.320 "name": "BaseBdev1", 00:15:45.320 "uuid": "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696", 00:15:45.320 "is_configured": true, 00:15:45.320 "data_offset": 0, 00:15:45.320 "data_size": 65536 00:15:45.320 }, 00:15:45.320 { 00:15:45.320 "name": null, 00:15:45.320 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:45.320 "is_configured": false, 00:15:45.320 "data_offset": 0, 00:15:45.320 "data_size": 65536 00:15:45.320 }, 00:15:45.320 { 00:15:45.320 "name": "BaseBdev3", 00:15:45.320 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:45.320 "is_configured": true, 00:15:45.320 "data_offset": 0, 00:15:45.320 "data_size": 65536 00:15:45.320 }, 00:15:45.320 { 00:15:45.320 "name": "BaseBdev4", 00:15:45.320 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:45.320 "is_configured": true, 00:15:45.320 "data_offset": 0, 00:15:45.320 "data_size": 65536 00:15:45.320 } 00:15:45.320 ] 00:15:45.320 }' 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.320 19:44:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.579 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.579 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.579 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:45.579 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.579 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.579 19:44:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:45.579 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:45.579 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.579 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.579 [2024-12-12 19:44:28.371619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.580 19:44:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.580 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.839 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.839 "name": "Existed_Raid", 00:15:45.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.839 "strip_size_kb": 64, 00:15:45.839 "state": "configuring", 00:15:45.839 "raid_level": "raid5f", 00:15:45.839 "superblock": false, 00:15:45.839 "num_base_bdevs": 4, 00:15:45.839 "num_base_bdevs_discovered": 2, 00:15:45.839 "num_base_bdevs_operational": 4, 00:15:45.839 "base_bdevs_list": [ 00:15:45.839 { 00:15:45.839 "name": "BaseBdev1", 00:15:45.839 "uuid": "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696", 00:15:45.839 "is_configured": true, 00:15:45.839 "data_offset": 0, 00:15:45.839 "data_size": 65536 00:15:45.839 }, 00:15:45.839 { 00:15:45.839 "name": null, 00:15:45.839 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:45.839 "is_configured": false, 00:15:45.839 "data_offset": 0, 00:15:45.839 "data_size": 65536 00:15:45.839 }, 00:15:45.839 { 00:15:45.839 "name": null, 00:15:45.839 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:45.839 "is_configured": false, 00:15:45.839 "data_offset": 0, 00:15:45.839 "data_size": 65536 00:15:45.839 }, 00:15:45.839 { 00:15:45.839 "name": "BaseBdev4", 00:15:45.839 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:45.839 "is_configured": true, 00:15:45.839 "data_offset": 0, 00:15:45.839 "data_size": 65536 00:15:45.839 } 00:15:45.839 ] 00:15:45.839 }' 00:15:45.839 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.839 19:44:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.099 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.099 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.099 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.100 [2024-12-12 19:44:28.854742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.100 
19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.100 "name": "Existed_Raid", 00:15:46.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.100 "strip_size_kb": 64, 00:15:46.100 "state": "configuring", 00:15:46.100 "raid_level": "raid5f", 00:15:46.100 "superblock": false, 00:15:46.100 "num_base_bdevs": 4, 00:15:46.100 "num_base_bdevs_discovered": 3, 00:15:46.100 "num_base_bdevs_operational": 4, 00:15:46.100 "base_bdevs_list": [ 00:15:46.100 { 00:15:46.100 "name": "BaseBdev1", 00:15:46.100 "uuid": "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696", 00:15:46.100 "is_configured": true, 00:15:46.100 "data_offset": 0, 00:15:46.100 "data_size": 65536 00:15:46.100 }, 00:15:46.100 { 00:15:46.100 "name": null, 00:15:46.100 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:46.100 "is_configured": 
false, 00:15:46.100 "data_offset": 0, 00:15:46.100 "data_size": 65536 00:15:46.100 }, 00:15:46.100 { 00:15:46.100 "name": "BaseBdev3", 00:15:46.100 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:46.100 "is_configured": true, 00:15:46.100 "data_offset": 0, 00:15:46.100 "data_size": 65536 00:15:46.100 }, 00:15:46.100 { 00:15:46.100 "name": "BaseBdev4", 00:15:46.100 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:46.100 "is_configured": true, 00:15:46.100 "data_offset": 0, 00:15:46.100 "data_size": 65536 00:15:46.100 } 00:15:46.100 ] 00:15:46.100 }' 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.100 19:44:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.670 [2024-12-12 19:44:29.382194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.670 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.930 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.930 "name": "Existed_Raid", 00:15:46.930 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:46.930 "strip_size_kb": 64, 00:15:46.930 "state": "configuring", 00:15:46.930 "raid_level": "raid5f", 00:15:46.930 "superblock": false, 00:15:46.930 "num_base_bdevs": 4, 00:15:46.930 "num_base_bdevs_discovered": 2, 00:15:46.930 "num_base_bdevs_operational": 4, 00:15:46.930 "base_bdevs_list": [ 00:15:46.930 { 00:15:46.930 "name": null, 00:15:46.930 "uuid": "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696", 00:15:46.930 "is_configured": false, 00:15:46.930 "data_offset": 0, 00:15:46.930 "data_size": 65536 00:15:46.930 }, 00:15:46.930 { 00:15:46.930 "name": null, 00:15:46.930 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:46.930 "is_configured": false, 00:15:46.930 "data_offset": 0, 00:15:46.930 "data_size": 65536 00:15:46.930 }, 00:15:46.930 { 00:15:46.930 "name": "BaseBdev3", 00:15:46.930 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:46.930 "is_configured": true, 00:15:46.930 "data_offset": 0, 00:15:46.930 "data_size": 65536 00:15:46.930 }, 00:15:46.930 { 00:15:46.930 "name": "BaseBdev4", 00:15:46.930 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:46.930 "is_configured": true, 00:15:46.930 "data_offset": 0, 00:15:46.930 "data_size": 65536 00:15:46.930 } 00:15:46.930 ] 00:15:46.930 }' 00:15:46.930 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.930 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.190 [2024-12-12 19:44:29.916859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.190 "name": "Existed_Raid", 00:15:47.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.190 "strip_size_kb": 64, 00:15:47.190 "state": "configuring", 00:15:47.190 "raid_level": "raid5f", 00:15:47.190 "superblock": false, 00:15:47.190 "num_base_bdevs": 4, 00:15:47.190 "num_base_bdevs_discovered": 3, 00:15:47.190 "num_base_bdevs_operational": 4, 00:15:47.190 "base_bdevs_list": [ 00:15:47.190 { 00:15:47.190 "name": null, 00:15:47.190 "uuid": "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696", 00:15:47.190 "is_configured": false, 00:15:47.190 "data_offset": 0, 00:15:47.190 "data_size": 65536 00:15:47.190 }, 00:15:47.190 { 00:15:47.190 "name": "BaseBdev2", 00:15:47.190 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:47.190 "is_configured": true, 00:15:47.190 "data_offset": 0, 00:15:47.190 "data_size": 65536 00:15:47.190 }, 00:15:47.190 { 00:15:47.190 "name": "BaseBdev3", 00:15:47.190 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:47.190 "is_configured": true, 00:15:47.190 "data_offset": 0, 00:15:47.190 "data_size": 65536 00:15:47.190 }, 00:15:47.190 { 00:15:47.190 "name": "BaseBdev4", 00:15:47.190 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:47.190 "is_configured": true, 00:15:47.190 "data_offset": 0, 00:15:47.190 "data_size": 65536 00:15:47.190 } 00:15:47.190 ] 00:15:47.190 }' 00:15:47.190 19:44:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.190 19:44:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 42cd807d-1c48-4eb9-bca3-bf3ec9a2e696 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.760 [2024-12-12 19:44:30.463543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:47.760 [2024-12-12 
19:44:30.463669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:47.760 [2024-12-12 19:44:30.463694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:47.760 [2024-12-12 19:44:30.463983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:47.760 [2024-12-12 19:44:30.470641] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:47.760 NewBaseBdev 00:15:47.760 [2024-12-12 19:44:30.470698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:47.760 [2024-12-12 19:44:30.470959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.760 [ 00:15:47.760 { 00:15:47.760 "name": "NewBaseBdev", 00:15:47.760 "aliases": [ 00:15:47.760 "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696" 00:15:47.760 ], 00:15:47.760 "product_name": "Malloc disk", 00:15:47.760 "block_size": 512, 00:15:47.760 "num_blocks": 65536, 00:15:47.760 "uuid": "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696", 00:15:47.760 "assigned_rate_limits": { 00:15:47.760 "rw_ios_per_sec": 0, 00:15:47.760 "rw_mbytes_per_sec": 0, 00:15:47.760 "r_mbytes_per_sec": 0, 00:15:47.760 "w_mbytes_per_sec": 0 00:15:47.760 }, 00:15:47.760 "claimed": true, 00:15:47.760 "claim_type": "exclusive_write", 00:15:47.760 "zoned": false, 00:15:47.760 "supported_io_types": { 00:15:47.760 "read": true, 00:15:47.760 "write": true, 00:15:47.760 "unmap": true, 00:15:47.760 "flush": true, 00:15:47.760 "reset": true, 00:15:47.760 "nvme_admin": false, 00:15:47.760 "nvme_io": false, 00:15:47.760 "nvme_io_md": false, 00:15:47.760 "write_zeroes": true, 00:15:47.760 "zcopy": true, 00:15:47.760 "get_zone_info": false, 00:15:47.760 "zone_management": false, 00:15:47.760 "zone_append": false, 00:15:47.760 "compare": false, 00:15:47.760 "compare_and_write": false, 00:15:47.760 "abort": true, 00:15:47.760 "seek_hole": false, 00:15:47.760 "seek_data": false, 00:15:47.760 "copy": true, 00:15:47.760 "nvme_iov_md": false 00:15:47.760 }, 00:15:47.760 "memory_domains": [ 00:15:47.760 { 00:15:47.760 "dma_device_id": "system", 00:15:47.760 "dma_device_type": 1 00:15:47.760 }, 00:15:47.760 { 00:15:47.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.760 "dma_device_type": 2 00:15:47.760 } 
00:15:47.760 ], 00:15:47.760 "driver_specific": {} 00:15:47.760 } 00:15:47.760 ] 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.760 "name": "Existed_Raid", 00:15:47.760 "uuid": "611d8678-4acd-454f-a457-17f5d8a4e1a5", 00:15:47.760 "strip_size_kb": 64, 00:15:47.760 "state": "online", 00:15:47.760 "raid_level": "raid5f", 00:15:47.760 "superblock": false, 00:15:47.760 "num_base_bdevs": 4, 00:15:47.760 "num_base_bdevs_discovered": 4, 00:15:47.760 "num_base_bdevs_operational": 4, 00:15:47.760 "base_bdevs_list": [ 00:15:47.760 { 00:15:47.760 "name": "NewBaseBdev", 00:15:47.760 "uuid": "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696", 00:15:47.760 "is_configured": true, 00:15:47.760 "data_offset": 0, 00:15:47.760 "data_size": 65536 00:15:47.760 }, 00:15:47.760 { 00:15:47.760 "name": "BaseBdev2", 00:15:47.760 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:47.760 "is_configured": true, 00:15:47.760 "data_offset": 0, 00:15:47.760 "data_size": 65536 00:15:47.760 }, 00:15:47.760 { 00:15:47.760 "name": "BaseBdev3", 00:15:47.760 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:47.760 "is_configured": true, 00:15:47.760 "data_offset": 0, 00:15:47.760 "data_size": 65536 00:15:47.760 }, 00:15:47.760 { 00:15:47.760 "name": "BaseBdev4", 00:15:47.760 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:47.760 "is_configured": true, 00:15:47.760 "data_offset": 0, 00:15:47.760 "data_size": 65536 00:15:47.760 } 00:15:47.760 ] 00:15:47.760 }' 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.760 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.330 [2024-12-12 19:44:30.942244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.330 "name": "Existed_Raid", 00:15:48.330 "aliases": [ 00:15:48.330 "611d8678-4acd-454f-a457-17f5d8a4e1a5" 00:15:48.330 ], 00:15:48.330 "product_name": "Raid Volume", 00:15:48.330 "block_size": 512, 00:15:48.330 "num_blocks": 196608, 00:15:48.330 "uuid": "611d8678-4acd-454f-a457-17f5d8a4e1a5", 00:15:48.330 "assigned_rate_limits": { 00:15:48.330 "rw_ios_per_sec": 0, 00:15:48.330 "rw_mbytes_per_sec": 0, 00:15:48.330 "r_mbytes_per_sec": 0, 00:15:48.330 "w_mbytes_per_sec": 0 00:15:48.330 }, 00:15:48.330 "claimed": false, 00:15:48.330 "zoned": false, 00:15:48.330 "supported_io_types": { 00:15:48.330 "read": true, 00:15:48.330 "write": true, 00:15:48.330 "unmap": false, 00:15:48.330 "flush": false, 00:15:48.330 "reset": true, 00:15:48.330 "nvme_admin": false, 00:15:48.330 "nvme_io": false, 00:15:48.330 "nvme_io_md": 
false, 00:15:48.330 "write_zeroes": true, 00:15:48.330 "zcopy": false, 00:15:48.330 "get_zone_info": false, 00:15:48.330 "zone_management": false, 00:15:48.330 "zone_append": false, 00:15:48.330 "compare": false, 00:15:48.330 "compare_and_write": false, 00:15:48.330 "abort": false, 00:15:48.330 "seek_hole": false, 00:15:48.330 "seek_data": false, 00:15:48.330 "copy": false, 00:15:48.330 "nvme_iov_md": false 00:15:48.330 }, 00:15:48.330 "driver_specific": { 00:15:48.330 "raid": { 00:15:48.330 "uuid": "611d8678-4acd-454f-a457-17f5d8a4e1a5", 00:15:48.330 "strip_size_kb": 64, 00:15:48.330 "state": "online", 00:15:48.330 "raid_level": "raid5f", 00:15:48.330 "superblock": false, 00:15:48.330 "num_base_bdevs": 4, 00:15:48.330 "num_base_bdevs_discovered": 4, 00:15:48.330 "num_base_bdevs_operational": 4, 00:15:48.330 "base_bdevs_list": [ 00:15:48.330 { 00:15:48.330 "name": "NewBaseBdev", 00:15:48.330 "uuid": "42cd807d-1c48-4eb9-bca3-bf3ec9a2e696", 00:15:48.330 "is_configured": true, 00:15:48.330 "data_offset": 0, 00:15:48.330 "data_size": 65536 00:15:48.330 }, 00:15:48.330 { 00:15:48.330 "name": "BaseBdev2", 00:15:48.330 "uuid": "631be27f-51db-42de-b0e8-4cfbf78551e6", 00:15:48.330 "is_configured": true, 00:15:48.330 "data_offset": 0, 00:15:48.330 "data_size": 65536 00:15:48.330 }, 00:15:48.330 { 00:15:48.330 "name": "BaseBdev3", 00:15:48.330 "uuid": "31808590-d938-4cf5-bc8c-80365f59775a", 00:15:48.330 "is_configured": true, 00:15:48.330 "data_offset": 0, 00:15:48.330 "data_size": 65536 00:15:48.330 }, 00:15:48.330 { 00:15:48.330 "name": "BaseBdev4", 00:15:48.330 "uuid": "572e1a69-52d9-486d-be5a-fe7f89ff52c1", 00:15:48.330 "is_configured": true, 00:15:48.330 "data_offset": 0, 00:15:48.330 "data_size": 65536 00:15:48.330 } 00:15:48.330 ] 00:15:48.330 } 00:15:48.330 } 00:15:48.330 }' 00:15:48.330 19:44:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.330 19:44:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:48.330 BaseBdev2 00:15:48.330 BaseBdev3 00:15:48.330 BaseBdev4' 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.330 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.590 19:44:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.590 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.590 [2024-12-12 19:44:31.277498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.591 [2024-12-12 19:44:31.277568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.591 [2024-12-12 19:44:31.277652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.591 [2024-12-12 19:44:31.277961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.591 [2024-12-12 19:44:31.278014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84442 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 84442 ']' 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 84442 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84442 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.591 killing process with pid 84442 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84442' 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 84442 00:15:48.591 [2024-12-12 19:44:31.325967] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.591 19:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 84442 00:15:49.160 [2024-12-12 19:44:31.694657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:50.101 00:15:50.101 real 0m11.317s 00:15:50.101 user 0m17.918s 00:15:50.101 sys 0m2.176s 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.101 ************************************ 00:15:50.101 END TEST raid5f_state_function_test 00:15:50.101 ************************************ 00:15:50.101 19:44:32 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:50.101 19:44:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:50.101 19:44:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.101 19:44:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.101 ************************************ 00:15:50.101 START TEST 
raid5f_state_function_test_sb 00:15:50.101 ************************************ 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:50.101 
19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=85112 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85112' 00:15:50.101 Process raid pid: 85112 00:15:50.101 19:44:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 85112 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85112 ']' 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.101 19:44:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.101 [2024-12-12 19:44:32.922981] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:50.101 [2024-12-12 19:44:32.923092] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.361 [2024-12-12 19:44:33.096238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.361 [2024-12-12 19:44:33.202678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.621 [2024-12-12 19:44:33.396125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.621 [2024-12-12 19:44:33.396164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.191 [2024-12-12 19:44:33.735860] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.191 [2024-12-12 19:44:33.735949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.191 [2024-12-12 19:44:33.735980] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.191 [2024-12-12 19:44:33.736020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.191 [2024-12-12 19:44:33.736059] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:51.191 [2024-12-12 19:44:33.736093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.191 [2024-12-12 19:44:33.736117] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:51.191 [2024-12-12 19:44:33.736138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.191 "name": "Existed_Raid", 00:15:51.191 "uuid": "fa395cd9-a3a4-4507-b1e9-bec758de1132", 00:15:51.191 "strip_size_kb": 64, 00:15:51.191 "state": "configuring", 00:15:51.191 "raid_level": "raid5f", 00:15:51.191 "superblock": true, 00:15:51.191 "num_base_bdevs": 4, 00:15:51.191 "num_base_bdevs_discovered": 0, 00:15:51.191 "num_base_bdevs_operational": 4, 00:15:51.191 "base_bdevs_list": [ 00:15:51.191 { 00:15:51.191 "name": "BaseBdev1", 00:15:51.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.191 "is_configured": false, 00:15:51.191 "data_offset": 0, 00:15:51.191 "data_size": 0 00:15:51.191 }, 00:15:51.191 { 00:15:51.191 "name": "BaseBdev2", 00:15:51.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.191 "is_configured": false, 00:15:51.191 "data_offset": 0, 00:15:51.191 "data_size": 0 00:15:51.191 }, 00:15:51.191 { 00:15:51.191 "name": "BaseBdev3", 00:15:51.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.191 "is_configured": false, 00:15:51.191 "data_offset": 0, 00:15:51.191 "data_size": 0 00:15:51.191 }, 00:15:51.191 { 00:15:51.191 "name": "BaseBdev4", 00:15:51.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.191 "is_configured": false, 00:15:51.191 "data_offset": 0, 00:15:51.191 "data_size": 0 00:15:51.191 } 00:15:51.191 ] 00:15:51.191 }' 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.191 19:44:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.451 [2024-12-12 19:44:34.147135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.451 [2024-12-12 19:44:34.147207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.451 [2024-12-12 19:44:34.159133] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.451 [2024-12-12 19:44:34.159173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.451 [2024-12-12 19:44:34.159181] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.451 [2024-12-12 19:44:34.159190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.451 [2024-12-12 19:44:34.159195] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:51.451 [2024-12-12 19:44:34.159204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.451 [2024-12-12 19:44:34.159209] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:51.451 [2024-12-12 19:44:34.159217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.451 [2024-12-12 19:44:34.205373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.451 BaseBdev1 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:51.451 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.452 [ 00:15:51.452 { 00:15:51.452 "name": "BaseBdev1", 00:15:51.452 "aliases": [ 00:15:51.452 "103dbf58-e902-4162-a988-f163353b06bd" 00:15:51.452 ], 00:15:51.452 "product_name": "Malloc disk", 00:15:51.452 "block_size": 512, 00:15:51.452 "num_blocks": 65536, 00:15:51.452 "uuid": "103dbf58-e902-4162-a988-f163353b06bd", 00:15:51.452 "assigned_rate_limits": { 00:15:51.452 "rw_ios_per_sec": 0, 00:15:51.452 "rw_mbytes_per_sec": 0, 00:15:51.452 "r_mbytes_per_sec": 0, 00:15:51.452 "w_mbytes_per_sec": 0 00:15:51.452 }, 00:15:51.452 "claimed": true, 00:15:51.452 "claim_type": "exclusive_write", 00:15:51.452 "zoned": false, 00:15:51.452 "supported_io_types": { 00:15:51.452 "read": true, 00:15:51.452 "write": true, 00:15:51.452 "unmap": true, 00:15:51.452 "flush": true, 00:15:51.452 "reset": true, 00:15:51.452 "nvme_admin": false, 00:15:51.452 "nvme_io": false, 00:15:51.452 "nvme_io_md": false, 00:15:51.452 "write_zeroes": true, 00:15:51.452 "zcopy": true, 00:15:51.452 "get_zone_info": false, 00:15:51.452 "zone_management": false, 00:15:51.452 "zone_append": false, 00:15:51.452 "compare": false, 00:15:51.452 "compare_and_write": false, 00:15:51.452 "abort": true, 00:15:51.452 "seek_hole": false, 00:15:51.452 "seek_data": false, 00:15:51.452 "copy": true, 00:15:51.452 "nvme_iov_md": false 00:15:51.452 }, 00:15:51.452 "memory_domains": [ 00:15:51.452 { 00:15:51.452 "dma_device_id": "system", 00:15:51.452 "dma_device_type": 1 00:15:51.452 }, 00:15:51.452 { 00:15:51.452 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:51.452 "dma_device_type": 2 00:15:51.452 } 00:15:51.452 ], 00:15:51.452 "driver_specific": {} 00:15:51.452 } 00:15:51.452 ] 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.452 19:44:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.452 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.711 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.711 "name": "Existed_Raid", 00:15:51.711 "uuid": "fc83db97-ef93-4555-9a6c-5932143d7e7f", 00:15:51.711 "strip_size_kb": 64, 00:15:51.711 "state": "configuring", 00:15:51.711 "raid_level": "raid5f", 00:15:51.711 "superblock": true, 00:15:51.711 "num_base_bdevs": 4, 00:15:51.711 "num_base_bdevs_discovered": 1, 00:15:51.711 "num_base_bdevs_operational": 4, 00:15:51.711 "base_bdevs_list": [ 00:15:51.711 { 00:15:51.711 "name": "BaseBdev1", 00:15:51.711 "uuid": "103dbf58-e902-4162-a988-f163353b06bd", 00:15:51.711 "is_configured": true, 00:15:51.711 "data_offset": 2048, 00:15:51.711 "data_size": 63488 00:15:51.711 }, 00:15:51.711 { 00:15:51.711 "name": "BaseBdev2", 00:15:51.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.711 "is_configured": false, 00:15:51.711 "data_offset": 0, 00:15:51.711 "data_size": 0 00:15:51.711 }, 00:15:51.711 { 00:15:51.711 "name": "BaseBdev3", 00:15:51.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.711 "is_configured": false, 00:15:51.711 "data_offset": 0, 00:15:51.711 "data_size": 0 00:15:51.711 }, 00:15:51.711 { 00:15:51.711 "name": "BaseBdev4", 00:15:51.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.711 "is_configured": false, 00:15:51.711 "data_offset": 0, 00:15:51.711 "data_size": 0 00:15:51.711 } 00:15:51.711 ] 00:15:51.711 }' 00:15:51.711 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.711 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.971 19:44:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.971 [2024-12-12 19:44:34.676605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.971 [2024-12-12 19:44:34.676678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.971 [2024-12-12 19:44:34.688644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.971 [2024-12-12 19:44:34.690367] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.971 [2024-12-12 19:44:34.690439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.971 [2024-12-12 19:44:34.690451] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:51.971 [2024-12-12 19:44:34.690461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.971 [2024-12-12 19:44:34.690468] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:51.971 [2024-12-12 19:44:34.690475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.971 19:44:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.971 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.971 "name": "Existed_Raid", 00:15:51.971 "uuid": "5ba3281c-de64-445e-822a-e1840b449952", 00:15:51.971 "strip_size_kb": 64, 00:15:51.971 "state": "configuring", 00:15:51.971 "raid_level": "raid5f", 00:15:51.971 "superblock": true, 00:15:51.971 "num_base_bdevs": 4, 00:15:51.971 "num_base_bdevs_discovered": 1, 00:15:51.971 "num_base_bdevs_operational": 4, 00:15:51.971 "base_bdevs_list": [ 00:15:51.971 { 00:15:51.971 "name": "BaseBdev1", 00:15:51.971 "uuid": "103dbf58-e902-4162-a988-f163353b06bd", 00:15:51.971 "is_configured": true, 00:15:51.971 "data_offset": 2048, 00:15:51.971 "data_size": 63488 00:15:51.971 }, 00:15:51.971 { 00:15:51.971 "name": "BaseBdev2", 00:15:51.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.971 "is_configured": false, 00:15:51.971 "data_offset": 0, 00:15:51.971 "data_size": 0 00:15:51.971 }, 00:15:51.971 { 00:15:51.971 "name": "BaseBdev3", 00:15:51.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.971 "is_configured": false, 00:15:51.971 "data_offset": 0, 00:15:51.971 "data_size": 0 00:15:51.971 }, 00:15:51.971 { 00:15:51.971 "name": "BaseBdev4", 00:15:51.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.972 "is_configured": false, 00:15:51.972 "data_offset": 0, 00:15:51.972 "data_size": 0 00:15:51.972 } 00:15:51.972 ] 00:15:51.972 }' 00:15:51.972 19:44:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.972 19:44:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.541 [2024-12-12 19:44:35.140931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.541 BaseBdev2 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.541 [ 00:15:52.541 { 00:15:52.541 "name": "BaseBdev2", 00:15:52.541 "aliases": [ 00:15:52.541 
"988dac89-94a7-485f-a284-eb97804da391" 00:15:52.541 ], 00:15:52.541 "product_name": "Malloc disk", 00:15:52.541 "block_size": 512, 00:15:52.541 "num_blocks": 65536, 00:15:52.541 "uuid": "988dac89-94a7-485f-a284-eb97804da391", 00:15:52.541 "assigned_rate_limits": { 00:15:52.541 "rw_ios_per_sec": 0, 00:15:52.541 "rw_mbytes_per_sec": 0, 00:15:52.541 "r_mbytes_per_sec": 0, 00:15:52.541 "w_mbytes_per_sec": 0 00:15:52.541 }, 00:15:52.541 "claimed": true, 00:15:52.541 "claim_type": "exclusive_write", 00:15:52.541 "zoned": false, 00:15:52.541 "supported_io_types": { 00:15:52.541 "read": true, 00:15:52.541 "write": true, 00:15:52.541 "unmap": true, 00:15:52.541 "flush": true, 00:15:52.541 "reset": true, 00:15:52.541 "nvme_admin": false, 00:15:52.541 "nvme_io": false, 00:15:52.541 "nvme_io_md": false, 00:15:52.541 "write_zeroes": true, 00:15:52.541 "zcopy": true, 00:15:52.541 "get_zone_info": false, 00:15:52.541 "zone_management": false, 00:15:52.541 "zone_append": false, 00:15:52.541 "compare": false, 00:15:52.541 "compare_and_write": false, 00:15:52.541 "abort": true, 00:15:52.541 "seek_hole": false, 00:15:52.541 "seek_data": false, 00:15:52.541 "copy": true, 00:15:52.541 "nvme_iov_md": false 00:15:52.541 }, 00:15:52.541 "memory_domains": [ 00:15:52.541 { 00:15:52.541 "dma_device_id": "system", 00:15:52.541 "dma_device_type": 1 00:15:52.541 }, 00:15:52.541 { 00:15:52.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.541 "dma_device_type": 2 00:15:52.541 } 00:15:52.541 ], 00:15:52.541 "driver_specific": {} 00:15:52.541 } 00:15:52.541 ] 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.541 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.541 "name": "Existed_Raid", 00:15:52.541 "uuid": 
"5ba3281c-de64-445e-822a-e1840b449952", 00:15:52.541 "strip_size_kb": 64, 00:15:52.541 "state": "configuring", 00:15:52.541 "raid_level": "raid5f", 00:15:52.541 "superblock": true, 00:15:52.541 "num_base_bdevs": 4, 00:15:52.541 "num_base_bdevs_discovered": 2, 00:15:52.541 "num_base_bdevs_operational": 4, 00:15:52.541 "base_bdevs_list": [ 00:15:52.541 { 00:15:52.541 "name": "BaseBdev1", 00:15:52.541 "uuid": "103dbf58-e902-4162-a988-f163353b06bd", 00:15:52.541 "is_configured": true, 00:15:52.541 "data_offset": 2048, 00:15:52.541 "data_size": 63488 00:15:52.541 }, 00:15:52.541 { 00:15:52.541 "name": "BaseBdev2", 00:15:52.541 "uuid": "988dac89-94a7-485f-a284-eb97804da391", 00:15:52.541 "is_configured": true, 00:15:52.541 "data_offset": 2048, 00:15:52.542 "data_size": 63488 00:15:52.542 }, 00:15:52.542 { 00:15:52.542 "name": "BaseBdev3", 00:15:52.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.542 "is_configured": false, 00:15:52.542 "data_offset": 0, 00:15:52.542 "data_size": 0 00:15:52.542 }, 00:15:52.542 { 00:15:52.542 "name": "BaseBdev4", 00:15:52.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.542 "is_configured": false, 00:15:52.542 "data_offset": 0, 00:15:52.542 "data_size": 0 00:15:52.542 } 00:15:52.542 ] 00:15:52.542 }' 00:15:52.542 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.542 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.801 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:52.801 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.801 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.061 [2024-12-12 19:44:35.664289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:53.061 BaseBdev3 
00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.061 [ 00:15:53.061 { 00:15:53.061 "name": "BaseBdev3", 00:15:53.061 "aliases": [ 00:15:53.061 "4af7a881-f251-44a9-983d-df7be28e14c2" 00:15:53.061 ], 00:15:53.061 "product_name": "Malloc disk", 00:15:53.061 "block_size": 512, 00:15:53.061 "num_blocks": 65536, 00:15:53.061 "uuid": "4af7a881-f251-44a9-983d-df7be28e14c2", 00:15:53.061 
"assigned_rate_limits": { 00:15:53.061 "rw_ios_per_sec": 0, 00:15:53.061 "rw_mbytes_per_sec": 0, 00:15:53.061 "r_mbytes_per_sec": 0, 00:15:53.061 "w_mbytes_per_sec": 0 00:15:53.061 }, 00:15:53.061 "claimed": true, 00:15:53.061 "claim_type": "exclusive_write", 00:15:53.061 "zoned": false, 00:15:53.061 "supported_io_types": { 00:15:53.061 "read": true, 00:15:53.061 "write": true, 00:15:53.061 "unmap": true, 00:15:53.061 "flush": true, 00:15:53.061 "reset": true, 00:15:53.061 "nvme_admin": false, 00:15:53.061 "nvme_io": false, 00:15:53.061 "nvme_io_md": false, 00:15:53.061 "write_zeroes": true, 00:15:53.061 "zcopy": true, 00:15:53.061 "get_zone_info": false, 00:15:53.061 "zone_management": false, 00:15:53.061 "zone_append": false, 00:15:53.061 "compare": false, 00:15:53.061 "compare_and_write": false, 00:15:53.061 "abort": true, 00:15:53.061 "seek_hole": false, 00:15:53.061 "seek_data": false, 00:15:53.061 "copy": true, 00:15:53.061 "nvme_iov_md": false 00:15:53.061 }, 00:15:53.061 "memory_domains": [ 00:15:53.061 { 00:15:53.061 "dma_device_id": "system", 00:15:53.061 "dma_device_type": 1 00:15:53.061 }, 00:15:53.061 { 00:15:53.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.061 "dma_device_type": 2 00:15:53.061 } 00:15:53.061 ], 00:15:53.061 "driver_specific": {} 00:15:53.061 } 00:15:53.061 ] 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.061 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.062 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.062 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.062 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.062 "name": "Existed_Raid", 00:15:53.062 "uuid": "5ba3281c-de64-445e-822a-e1840b449952", 00:15:53.062 "strip_size_kb": 64, 00:15:53.062 "state": "configuring", 00:15:53.062 "raid_level": "raid5f", 00:15:53.062 "superblock": true, 00:15:53.062 "num_base_bdevs": 4, 00:15:53.062 "num_base_bdevs_discovered": 3, 
00:15:53.062 "num_base_bdevs_operational": 4, 00:15:53.062 "base_bdevs_list": [ 00:15:53.062 { 00:15:53.062 "name": "BaseBdev1", 00:15:53.062 "uuid": "103dbf58-e902-4162-a988-f163353b06bd", 00:15:53.062 "is_configured": true, 00:15:53.062 "data_offset": 2048, 00:15:53.062 "data_size": 63488 00:15:53.062 }, 00:15:53.062 { 00:15:53.062 "name": "BaseBdev2", 00:15:53.062 "uuid": "988dac89-94a7-485f-a284-eb97804da391", 00:15:53.062 "is_configured": true, 00:15:53.062 "data_offset": 2048, 00:15:53.062 "data_size": 63488 00:15:53.062 }, 00:15:53.062 { 00:15:53.062 "name": "BaseBdev3", 00:15:53.062 "uuid": "4af7a881-f251-44a9-983d-df7be28e14c2", 00:15:53.062 "is_configured": true, 00:15:53.062 "data_offset": 2048, 00:15:53.062 "data_size": 63488 00:15:53.062 }, 00:15:53.062 { 00:15:53.062 "name": "BaseBdev4", 00:15:53.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.062 "is_configured": false, 00:15:53.062 "data_offset": 0, 00:15:53.062 "data_size": 0 00:15:53.062 } 00:15:53.062 ] 00:15:53.062 }' 00:15:53.062 19:44:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.062 19:44:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.321 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:53.321 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.321 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.582 [2024-12-12 19:44:36.172741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:53.582 [2024-12-12 19:44:36.172994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:53.582 [2024-12-12 19:44:36.173012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:53.582 [2024-12-12 
19:44:36.173270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:53.582 BaseBdev4 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.582 [2024-12-12 19:44:36.180276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:53.582 [2024-12-12 19:44:36.180300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:53.582 [2024-12-12 19:44:36.180461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:53.582 19:44:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.582 [ 00:15:53.582 { 00:15:53.582 "name": "BaseBdev4", 00:15:53.582 "aliases": [ 00:15:53.582 "b01c73c8-28be-4ba6-b779-7021bbe471fe" 00:15:53.582 ], 00:15:53.582 "product_name": "Malloc disk", 00:15:53.582 "block_size": 512, 00:15:53.582 "num_blocks": 65536, 00:15:53.582 "uuid": "b01c73c8-28be-4ba6-b779-7021bbe471fe", 00:15:53.582 "assigned_rate_limits": { 00:15:53.582 "rw_ios_per_sec": 0, 00:15:53.582 "rw_mbytes_per_sec": 0, 00:15:53.582 "r_mbytes_per_sec": 0, 00:15:53.582 "w_mbytes_per_sec": 0 00:15:53.582 }, 00:15:53.582 "claimed": true, 00:15:53.582 "claim_type": "exclusive_write", 00:15:53.582 "zoned": false, 00:15:53.582 "supported_io_types": { 00:15:53.582 "read": true, 00:15:53.582 "write": true, 00:15:53.582 "unmap": true, 00:15:53.582 "flush": true, 00:15:53.582 "reset": true, 00:15:53.582 "nvme_admin": false, 00:15:53.582 "nvme_io": false, 00:15:53.582 "nvme_io_md": false, 00:15:53.582 "write_zeroes": true, 00:15:53.582 "zcopy": true, 00:15:53.582 "get_zone_info": false, 00:15:53.582 "zone_management": false, 00:15:53.582 "zone_append": false, 00:15:53.582 "compare": false, 00:15:53.582 "compare_and_write": false, 00:15:53.582 "abort": true, 00:15:53.582 "seek_hole": false, 00:15:53.582 "seek_data": false, 00:15:53.582 "copy": true, 00:15:53.582 "nvme_iov_md": false 00:15:53.582 }, 00:15:53.582 "memory_domains": [ 00:15:53.582 { 00:15:53.582 "dma_device_id": "system", 00:15:53.582 "dma_device_type": 1 00:15:53.582 }, 00:15:53.582 { 00:15:53.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.582 "dma_device_type": 2 00:15:53.582 } 00:15:53.582 ], 00:15:53.582 "driver_specific": {} 00:15:53.582 } 00:15:53.582 ] 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.582 19:44:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.582 "name": "Existed_Raid", 00:15:53.582 "uuid": "5ba3281c-de64-445e-822a-e1840b449952", 00:15:53.582 "strip_size_kb": 64, 00:15:53.582 "state": "online", 00:15:53.582 "raid_level": "raid5f", 00:15:53.582 "superblock": true, 00:15:53.582 "num_base_bdevs": 4, 00:15:53.582 "num_base_bdevs_discovered": 4, 00:15:53.582 "num_base_bdevs_operational": 4, 00:15:53.582 "base_bdevs_list": [ 00:15:53.582 { 00:15:53.582 "name": "BaseBdev1", 00:15:53.582 "uuid": "103dbf58-e902-4162-a988-f163353b06bd", 00:15:53.582 "is_configured": true, 00:15:53.582 "data_offset": 2048, 00:15:53.582 "data_size": 63488 00:15:53.582 }, 00:15:53.582 { 00:15:53.582 "name": "BaseBdev2", 00:15:53.582 "uuid": "988dac89-94a7-485f-a284-eb97804da391", 00:15:53.582 "is_configured": true, 00:15:53.582 "data_offset": 2048, 00:15:53.582 "data_size": 63488 00:15:53.582 }, 00:15:53.582 { 00:15:53.582 "name": "BaseBdev3", 00:15:53.582 "uuid": "4af7a881-f251-44a9-983d-df7be28e14c2", 00:15:53.582 "is_configured": true, 00:15:53.582 "data_offset": 2048, 00:15:53.582 "data_size": 63488 00:15:53.582 }, 00:15:53.582 { 00:15:53.582 "name": "BaseBdev4", 00:15:53.582 "uuid": "b01c73c8-28be-4ba6-b779-7021bbe471fe", 00:15:53.582 "is_configured": true, 00:15:53.582 "data_offset": 2048, 00:15:53.582 "data_size": 63488 00:15:53.582 } 00:15:53.582 ] 00:15:53.582 }' 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.582 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.151 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:54.151 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:54.151 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:54.151 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:54.151 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:54.151 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.152 [2024-12-12 19:44:36.699666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.152 "name": "Existed_Raid", 00:15:54.152 "aliases": [ 00:15:54.152 "5ba3281c-de64-445e-822a-e1840b449952" 00:15:54.152 ], 00:15:54.152 "product_name": "Raid Volume", 00:15:54.152 "block_size": 512, 00:15:54.152 "num_blocks": 190464, 00:15:54.152 "uuid": "5ba3281c-de64-445e-822a-e1840b449952", 00:15:54.152 "assigned_rate_limits": { 00:15:54.152 "rw_ios_per_sec": 0, 00:15:54.152 "rw_mbytes_per_sec": 0, 00:15:54.152 "r_mbytes_per_sec": 0, 00:15:54.152 "w_mbytes_per_sec": 0 00:15:54.152 }, 00:15:54.152 "claimed": false, 00:15:54.152 "zoned": false, 00:15:54.152 "supported_io_types": { 00:15:54.152 "read": true, 00:15:54.152 "write": true, 00:15:54.152 "unmap": false, 00:15:54.152 "flush": false, 
00:15:54.152 "reset": true, 00:15:54.152 "nvme_admin": false, 00:15:54.152 "nvme_io": false, 00:15:54.152 "nvme_io_md": false, 00:15:54.152 "write_zeroes": true, 00:15:54.152 "zcopy": false, 00:15:54.152 "get_zone_info": false, 00:15:54.152 "zone_management": false, 00:15:54.152 "zone_append": false, 00:15:54.152 "compare": false, 00:15:54.152 "compare_and_write": false, 00:15:54.152 "abort": false, 00:15:54.152 "seek_hole": false, 00:15:54.152 "seek_data": false, 00:15:54.152 "copy": false, 00:15:54.152 "nvme_iov_md": false 00:15:54.152 }, 00:15:54.152 "driver_specific": { 00:15:54.152 "raid": { 00:15:54.152 "uuid": "5ba3281c-de64-445e-822a-e1840b449952", 00:15:54.152 "strip_size_kb": 64, 00:15:54.152 "state": "online", 00:15:54.152 "raid_level": "raid5f", 00:15:54.152 "superblock": true, 00:15:54.152 "num_base_bdevs": 4, 00:15:54.152 "num_base_bdevs_discovered": 4, 00:15:54.152 "num_base_bdevs_operational": 4, 00:15:54.152 "base_bdevs_list": [ 00:15:54.152 { 00:15:54.152 "name": "BaseBdev1", 00:15:54.152 "uuid": "103dbf58-e902-4162-a988-f163353b06bd", 00:15:54.152 "is_configured": true, 00:15:54.152 "data_offset": 2048, 00:15:54.152 "data_size": 63488 00:15:54.152 }, 00:15:54.152 { 00:15:54.152 "name": "BaseBdev2", 00:15:54.152 "uuid": "988dac89-94a7-485f-a284-eb97804da391", 00:15:54.152 "is_configured": true, 00:15:54.152 "data_offset": 2048, 00:15:54.152 "data_size": 63488 00:15:54.152 }, 00:15:54.152 { 00:15:54.152 "name": "BaseBdev3", 00:15:54.152 "uuid": "4af7a881-f251-44a9-983d-df7be28e14c2", 00:15:54.152 "is_configured": true, 00:15:54.152 "data_offset": 2048, 00:15:54.152 "data_size": 63488 00:15:54.152 }, 00:15:54.152 { 00:15:54.152 "name": "BaseBdev4", 00:15:54.152 "uuid": "b01c73c8-28be-4ba6-b779-7021bbe471fe", 00:15:54.152 "is_configured": true, 00:15:54.152 "data_offset": 2048, 00:15:54.152 "data_size": 63488 00:15:54.152 } 00:15:54.152 ] 00:15:54.152 } 00:15:54.152 } 00:15:54.152 }' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:54.152 BaseBdev2 00:15:54.152 BaseBdev3 00:15:54.152 BaseBdev4' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.152 19:44:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.412 [2024-12-12 19:44:37.010926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.412 "name": "Existed_Raid", 00:15:54.412 "uuid": "5ba3281c-de64-445e-822a-e1840b449952", 00:15:54.412 "strip_size_kb": 64, 00:15:54.412 "state": "online", 00:15:54.412 "raid_level": "raid5f", 00:15:54.412 "superblock": true, 00:15:54.412 "num_base_bdevs": 4, 00:15:54.412 "num_base_bdevs_discovered": 3, 00:15:54.412 "num_base_bdevs_operational": 3, 00:15:54.412 "base_bdevs_list": [ 00:15:54.412 { 00:15:54.412 "name": null, 00:15:54.412 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:54.412 "is_configured": false, 00:15:54.412 "data_offset": 0, 00:15:54.412 "data_size": 63488 00:15:54.412 }, 00:15:54.412 { 00:15:54.412 "name": "BaseBdev2", 00:15:54.412 "uuid": "988dac89-94a7-485f-a284-eb97804da391", 00:15:54.412 "is_configured": true, 00:15:54.412 "data_offset": 2048, 00:15:54.412 "data_size": 63488 00:15:54.412 }, 00:15:54.412 { 00:15:54.412 "name": "BaseBdev3", 00:15:54.412 "uuid": "4af7a881-f251-44a9-983d-df7be28e14c2", 00:15:54.412 "is_configured": true, 00:15:54.412 "data_offset": 2048, 00:15:54.412 "data_size": 63488 00:15:54.412 }, 00:15:54.412 { 00:15:54.412 "name": "BaseBdev4", 00:15:54.412 "uuid": "b01c73c8-28be-4ba6-b779-7021bbe471fe", 00:15:54.412 "is_configured": true, 00:15:54.412 "data_offset": 2048, 00:15:54.412 "data_size": 63488 00:15:54.412 } 00:15:54.412 ] 00:15:54.412 }' 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.412 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 [2024-12-12 19:44:37.628283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.981 [2024-12-12 19:44:37.628437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.981 [2024-12-12 19:44:37.716147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.981 
19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.981 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 [2024-12-12 19:44:37.772081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.241 19:44:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.241 [2024-12-12 19:44:37.921702] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:55.241 [2024-12-12 19:44:37.921752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.241 BaseBdev2 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.241 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.501 [ 00:15:55.501 { 00:15:55.501 "name": "BaseBdev2", 00:15:55.501 "aliases": [ 00:15:55.501 "56cc6eb0-edbc-449e-a164-dee062ded7d7" 00:15:55.501 ], 00:15:55.501 "product_name": "Malloc disk", 00:15:55.501 "block_size": 512, 00:15:55.501 "num_blocks": 65536, 00:15:55.501 "uuid": 
"56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:55.501 "assigned_rate_limits": { 00:15:55.501 "rw_ios_per_sec": 0, 00:15:55.501 "rw_mbytes_per_sec": 0, 00:15:55.501 "r_mbytes_per_sec": 0, 00:15:55.501 "w_mbytes_per_sec": 0 00:15:55.501 }, 00:15:55.501 "claimed": false, 00:15:55.501 "zoned": false, 00:15:55.501 "supported_io_types": { 00:15:55.501 "read": true, 00:15:55.501 "write": true, 00:15:55.501 "unmap": true, 00:15:55.501 "flush": true, 00:15:55.501 "reset": true, 00:15:55.501 "nvme_admin": false, 00:15:55.501 "nvme_io": false, 00:15:55.501 "nvme_io_md": false, 00:15:55.501 "write_zeroes": true, 00:15:55.501 "zcopy": true, 00:15:55.501 "get_zone_info": false, 00:15:55.501 "zone_management": false, 00:15:55.501 "zone_append": false, 00:15:55.501 "compare": false, 00:15:55.501 "compare_and_write": false, 00:15:55.501 "abort": true, 00:15:55.501 "seek_hole": false, 00:15:55.501 "seek_data": false, 00:15:55.501 "copy": true, 00:15:55.501 "nvme_iov_md": false 00:15:55.501 }, 00:15:55.501 "memory_domains": [ 00:15:55.501 { 00:15:55.501 "dma_device_id": "system", 00:15:55.501 "dma_device_type": 1 00:15:55.501 }, 00:15:55.501 { 00:15:55.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.501 "dma_device_type": 2 00:15:55.501 } 00:15:55.501 ], 00:15:55.501 "driver_specific": {} 00:15:55.501 } 00:15:55.501 ] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.501 BaseBdev3 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.501 [ 00:15:55.501 { 00:15:55.501 "name": "BaseBdev3", 00:15:55.501 "aliases": [ 00:15:55.501 "026992af-98b8-45bb-9452-18b1797f2ebb" 00:15:55.501 ], 00:15:55.501 
"product_name": "Malloc disk", 00:15:55.501 "block_size": 512, 00:15:55.501 "num_blocks": 65536, 00:15:55.501 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:55.501 "assigned_rate_limits": { 00:15:55.501 "rw_ios_per_sec": 0, 00:15:55.501 "rw_mbytes_per_sec": 0, 00:15:55.501 "r_mbytes_per_sec": 0, 00:15:55.501 "w_mbytes_per_sec": 0 00:15:55.501 }, 00:15:55.501 "claimed": false, 00:15:55.501 "zoned": false, 00:15:55.501 "supported_io_types": { 00:15:55.501 "read": true, 00:15:55.501 "write": true, 00:15:55.501 "unmap": true, 00:15:55.501 "flush": true, 00:15:55.501 "reset": true, 00:15:55.501 "nvme_admin": false, 00:15:55.501 "nvme_io": false, 00:15:55.501 "nvme_io_md": false, 00:15:55.501 "write_zeroes": true, 00:15:55.501 "zcopy": true, 00:15:55.501 "get_zone_info": false, 00:15:55.501 "zone_management": false, 00:15:55.501 "zone_append": false, 00:15:55.501 "compare": false, 00:15:55.501 "compare_and_write": false, 00:15:55.501 "abort": true, 00:15:55.501 "seek_hole": false, 00:15:55.501 "seek_data": false, 00:15:55.501 "copy": true, 00:15:55.501 "nvme_iov_md": false 00:15:55.501 }, 00:15:55.501 "memory_domains": [ 00:15:55.501 { 00:15:55.501 "dma_device_id": "system", 00:15:55.501 "dma_device_type": 1 00:15:55.501 }, 00:15:55.501 { 00:15:55.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.501 "dma_device_type": 2 00:15:55.501 } 00:15:55.501 ], 00:15:55.501 "driver_specific": {} 00:15:55.501 } 00:15:55.501 ] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.501 BaseBdev4 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.501 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.501 [ 00:15:55.501 { 00:15:55.501 "name": "BaseBdev4", 00:15:55.501 
"aliases": [ 00:15:55.501 "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d" 00:15:55.501 ], 00:15:55.501 "product_name": "Malloc disk", 00:15:55.501 "block_size": 512, 00:15:55.501 "num_blocks": 65536, 00:15:55.501 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:55.501 "assigned_rate_limits": { 00:15:55.501 "rw_ios_per_sec": 0, 00:15:55.501 "rw_mbytes_per_sec": 0, 00:15:55.501 "r_mbytes_per_sec": 0, 00:15:55.501 "w_mbytes_per_sec": 0 00:15:55.501 }, 00:15:55.501 "claimed": false, 00:15:55.501 "zoned": false, 00:15:55.501 "supported_io_types": { 00:15:55.501 "read": true, 00:15:55.501 "write": true, 00:15:55.501 "unmap": true, 00:15:55.501 "flush": true, 00:15:55.501 "reset": true, 00:15:55.501 "nvme_admin": false, 00:15:55.501 "nvme_io": false, 00:15:55.501 "nvme_io_md": false, 00:15:55.501 "write_zeroes": true, 00:15:55.501 "zcopy": true, 00:15:55.501 "get_zone_info": false, 00:15:55.501 "zone_management": false, 00:15:55.501 "zone_append": false, 00:15:55.501 "compare": false, 00:15:55.501 "compare_and_write": false, 00:15:55.501 "abort": true, 00:15:55.501 "seek_hole": false, 00:15:55.501 "seek_data": false, 00:15:55.501 "copy": true, 00:15:55.501 "nvme_iov_md": false 00:15:55.501 }, 00:15:55.501 "memory_domains": [ 00:15:55.501 { 00:15:55.502 "dma_device_id": "system", 00:15:55.502 "dma_device_type": 1 00:15:55.502 }, 00:15:55.502 { 00:15:55.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.502 "dma_device_type": 2 00:15:55.502 } 00:15:55.502 ], 00:15:55.502 "driver_specific": {} 00:15:55.502 } 00:15:55.502 ] 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.502 
19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.502 [2024-12-12 19:44:38.279108] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.502 [2024-12-12 19:44:38.279149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.502 [2024-12-12 19:44:38.279168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.502 [2024-12-12 19:44:38.280977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.502 [2024-12-12 19:44:38.281033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.502 "name": "Existed_Raid", 00:15:55.502 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:55.502 "strip_size_kb": 64, 00:15:55.502 "state": "configuring", 00:15:55.502 "raid_level": "raid5f", 00:15:55.502 "superblock": true, 00:15:55.502 "num_base_bdevs": 4, 00:15:55.502 "num_base_bdevs_discovered": 3, 00:15:55.502 "num_base_bdevs_operational": 4, 00:15:55.502 "base_bdevs_list": [ 00:15:55.502 { 00:15:55.502 "name": "BaseBdev1", 00:15:55.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.502 "is_configured": false, 00:15:55.502 "data_offset": 0, 00:15:55.502 "data_size": 0 00:15:55.502 }, 00:15:55.502 { 00:15:55.502 "name": "BaseBdev2", 00:15:55.502 "uuid": "56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:55.502 "is_configured": true, 00:15:55.502 "data_offset": 2048, 00:15:55.502 "data_size": 63488 00:15:55.502 }, 00:15:55.502 { 00:15:55.502 "name": "BaseBdev3", 
00:15:55.502 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:55.502 "is_configured": true, 00:15:55.502 "data_offset": 2048, 00:15:55.502 "data_size": 63488 00:15:55.502 }, 00:15:55.502 { 00:15:55.502 "name": "BaseBdev4", 00:15:55.502 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:55.502 "is_configured": true, 00:15:55.502 "data_offset": 2048, 00:15:55.502 "data_size": 63488 00:15:55.502 } 00:15:55.502 ] 00:15:55.502 }' 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.502 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.071 [2024-12-12 19:44:38.742358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.071 
19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.071 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.071 "name": "Existed_Raid", 00:15:56.071 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:56.071 "strip_size_kb": 64, 00:15:56.071 "state": "configuring", 00:15:56.071 "raid_level": "raid5f", 00:15:56.071 "superblock": true, 00:15:56.071 "num_base_bdevs": 4, 00:15:56.071 "num_base_bdevs_discovered": 2, 00:15:56.071 "num_base_bdevs_operational": 4, 00:15:56.071 "base_bdevs_list": [ 00:15:56.071 { 00:15:56.071 "name": "BaseBdev1", 00:15:56.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.071 "is_configured": false, 00:15:56.071 "data_offset": 0, 00:15:56.071 "data_size": 0 00:15:56.071 }, 00:15:56.071 { 00:15:56.072 "name": null, 00:15:56.072 "uuid": "56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:56.072 "is_configured": false, 00:15:56.072 "data_offset": 0, 00:15:56.072 "data_size": 63488 00:15:56.072 }, 00:15:56.072 { 
00:15:56.072 "name": "BaseBdev3", 00:15:56.072 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:56.072 "is_configured": true, 00:15:56.072 "data_offset": 2048, 00:15:56.072 "data_size": 63488 00:15:56.072 }, 00:15:56.072 { 00:15:56.072 "name": "BaseBdev4", 00:15:56.072 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:56.072 "is_configured": true, 00:15:56.072 "data_offset": 2048, 00:15:56.072 "data_size": 63488 00:15:56.072 } 00:15:56.072 ] 00:15:56.072 }' 00:15:56.072 19:44:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.072 19:44:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.640 [2024-12-12 19:44:39.264454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.640 BaseBdev1 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.640 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.641 [ 00:15:56.641 { 00:15:56.641 "name": "BaseBdev1", 00:15:56.641 "aliases": [ 00:15:56.641 "1e0a730a-b2dd-424e-941e-fc960f961367" 00:15:56.641 ], 00:15:56.641 "product_name": "Malloc disk", 00:15:56.641 "block_size": 512, 00:15:56.641 "num_blocks": 65536, 00:15:56.641 "uuid": "1e0a730a-b2dd-424e-941e-fc960f961367", 00:15:56.641 "assigned_rate_limits": { 00:15:56.641 "rw_ios_per_sec": 0, 00:15:56.641 "rw_mbytes_per_sec": 0, 00:15:56.641 
"r_mbytes_per_sec": 0, 00:15:56.641 "w_mbytes_per_sec": 0 00:15:56.641 }, 00:15:56.641 "claimed": true, 00:15:56.641 "claim_type": "exclusive_write", 00:15:56.641 "zoned": false, 00:15:56.641 "supported_io_types": { 00:15:56.641 "read": true, 00:15:56.641 "write": true, 00:15:56.641 "unmap": true, 00:15:56.641 "flush": true, 00:15:56.641 "reset": true, 00:15:56.641 "nvme_admin": false, 00:15:56.641 "nvme_io": false, 00:15:56.641 "nvme_io_md": false, 00:15:56.641 "write_zeroes": true, 00:15:56.641 "zcopy": true, 00:15:56.641 "get_zone_info": false, 00:15:56.641 "zone_management": false, 00:15:56.641 "zone_append": false, 00:15:56.641 "compare": false, 00:15:56.641 "compare_and_write": false, 00:15:56.641 "abort": true, 00:15:56.641 "seek_hole": false, 00:15:56.641 "seek_data": false, 00:15:56.641 "copy": true, 00:15:56.641 "nvme_iov_md": false 00:15:56.641 }, 00:15:56.641 "memory_domains": [ 00:15:56.641 { 00:15:56.641 "dma_device_id": "system", 00:15:56.641 "dma_device_type": 1 00:15:56.641 }, 00:15:56.641 { 00:15:56.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.641 "dma_device_type": 2 00:15:56.641 } 00:15:56.641 ], 00:15:56.641 "driver_specific": {} 00:15:56.641 } 00:15:56.641 ] 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.641 19:44:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.641 "name": "Existed_Raid", 00:15:56.641 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:56.641 "strip_size_kb": 64, 00:15:56.641 "state": "configuring", 00:15:56.641 "raid_level": "raid5f", 00:15:56.641 "superblock": true, 00:15:56.641 "num_base_bdevs": 4, 00:15:56.641 "num_base_bdevs_discovered": 3, 00:15:56.641 "num_base_bdevs_operational": 4, 00:15:56.641 "base_bdevs_list": [ 00:15:56.641 { 00:15:56.641 "name": "BaseBdev1", 00:15:56.641 "uuid": "1e0a730a-b2dd-424e-941e-fc960f961367", 00:15:56.641 "is_configured": true, 00:15:56.641 "data_offset": 2048, 00:15:56.641 "data_size": 63488 00:15:56.641 
}, 00:15:56.641 { 00:15:56.641 "name": null, 00:15:56.641 "uuid": "56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:56.641 "is_configured": false, 00:15:56.641 "data_offset": 0, 00:15:56.641 "data_size": 63488 00:15:56.641 }, 00:15:56.641 { 00:15:56.641 "name": "BaseBdev3", 00:15:56.641 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:56.641 "is_configured": true, 00:15:56.641 "data_offset": 2048, 00:15:56.641 "data_size": 63488 00:15:56.641 }, 00:15:56.641 { 00:15:56.641 "name": "BaseBdev4", 00:15:56.641 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:56.641 "is_configured": true, 00:15:56.641 "data_offset": 2048, 00:15:56.641 "data_size": 63488 00:15:56.641 } 00:15:56.641 ] 00:15:56.641 }' 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.641 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.900 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:56.900 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.900 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.900 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.159 
[2024-12-12 19:44:39.763672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:57.159 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.159 "name": "Existed_Raid", 00:15:57.159 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:57.159 "strip_size_kb": 64, 00:15:57.159 "state": "configuring", 00:15:57.159 "raid_level": "raid5f", 00:15:57.159 "superblock": true, 00:15:57.159 "num_base_bdevs": 4, 00:15:57.159 "num_base_bdevs_discovered": 2, 00:15:57.159 "num_base_bdevs_operational": 4, 00:15:57.159 "base_bdevs_list": [ 00:15:57.159 { 00:15:57.159 "name": "BaseBdev1", 00:15:57.159 "uuid": "1e0a730a-b2dd-424e-941e-fc960f961367", 00:15:57.159 "is_configured": true, 00:15:57.159 "data_offset": 2048, 00:15:57.159 "data_size": 63488 00:15:57.159 }, 00:15:57.159 { 00:15:57.159 "name": null, 00:15:57.159 "uuid": "56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:57.159 "is_configured": false, 00:15:57.159 "data_offset": 0, 00:15:57.159 "data_size": 63488 00:15:57.159 }, 00:15:57.159 { 00:15:57.159 "name": null, 00:15:57.159 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:57.159 "is_configured": false, 00:15:57.159 "data_offset": 0, 00:15:57.159 "data_size": 63488 00:15:57.159 }, 00:15:57.159 { 00:15:57.160 "name": "BaseBdev4", 00:15:57.160 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:57.160 "is_configured": true, 00:15:57.160 "data_offset": 2048, 00:15:57.160 "data_size": 63488 00:15:57.160 } 00:15:57.160 ] 00:15:57.160 }' 00:15:57.160 19:44:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.160 19:44:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.418 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.418 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.418 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:57.418 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:57.418 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.418 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:57.418 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:57.418 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.418 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.419 [2024-12-12 19:44:40.238829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.419 19:44:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.419 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.678 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.678 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.678 "name": "Existed_Raid", 00:15:57.678 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:57.678 "strip_size_kb": 64, 00:15:57.678 "state": "configuring", 00:15:57.678 "raid_level": "raid5f", 00:15:57.678 "superblock": true, 00:15:57.678 "num_base_bdevs": 4, 00:15:57.678 "num_base_bdevs_discovered": 3, 00:15:57.678 "num_base_bdevs_operational": 4, 00:15:57.678 "base_bdevs_list": [ 00:15:57.678 { 00:15:57.678 "name": "BaseBdev1", 00:15:57.678 "uuid": "1e0a730a-b2dd-424e-941e-fc960f961367", 00:15:57.678 "is_configured": true, 00:15:57.678 "data_offset": 2048, 00:15:57.678 "data_size": 63488 00:15:57.678 }, 00:15:57.678 { 00:15:57.678 "name": null, 00:15:57.678 "uuid": "56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:57.678 "is_configured": false, 00:15:57.678 "data_offset": 0, 00:15:57.678 "data_size": 63488 00:15:57.678 }, 00:15:57.678 { 00:15:57.678 "name": "BaseBdev3", 00:15:57.678 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:57.678 "is_configured": true, 00:15:57.678 "data_offset": 2048, 00:15:57.678 "data_size": 63488 00:15:57.678 }, 00:15:57.678 { 
00:15:57.678 "name": "BaseBdev4", 00:15:57.678 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:57.678 "is_configured": true, 00:15:57.678 "data_offset": 2048, 00:15:57.678 "data_size": 63488 00:15:57.678 } 00:15:57.678 ] 00:15:57.678 }' 00:15:57.678 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.678 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.938 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.938 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:57.938 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.938 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.938 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.938 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:57.938 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:57.938 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.938 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.938 [2024-12-12 19:44:40.754392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.197 "name": "Existed_Raid", 00:15:58.197 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:58.197 "strip_size_kb": 64, 00:15:58.197 "state": "configuring", 00:15:58.197 "raid_level": "raid5f", 00:15:58.197 "superblock": true, 00:15:58.197 "num_base_bdevs": 4, 00:15:58.197 "num_base_bdevs_discovered": 2, 00:15:58.197 
"num_base_bdevs_operational": 4, 00:15:58.197 "base_bdevs_list": [ 00:15:58.197 { 00:15:58.197 "name": null, 00:15:58.197 "uuid": "1e0a730a-b2dd-424e-941e-fc960f961367", 00:15:58.197 "is_configured": false, 00:15:58.197 "data_offset": 0, 00:15:58.197 "data_size": 63488 00:15:58.197 }, 00:15:58.197 { 00:15:58.197 "name": null, 00:15:58.197 "uuid": "56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:58.197 "is_configured": false, 00:15:58.197 "data_offset": 0, 00:15:58.197 "data_size": 63488 00:15:58.197 }, 00:15:58.197 { 00:15:58.197 "name": "BaseBdev3", 00:15:58.197 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:58.197 "is_configured": true, 00:15:58.197 "data_offset": 2048, 00:15:58.197 "data_size": 63488 00:15:58.197 }, 00:15:58.197 { 00:15:58.197 "name": "BaseBdev4", 00:15:58.197 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:58.197 "is_configured": true, 00:15:58.197 "data_offset": 2048, 00:15:58.197 "data_size": 63488 00:15:58.197 } 00:15:58.197 ] 00:15:58.197 }' 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.197 19:44:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.456 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:58.456 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.456 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.456 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.715 [2024-12-12 19:44:41.317957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.715 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.715 "name": "Existed_Raid", 00:15:58.715 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:58.715 "strip_size_kb": 64, 00:15:58.715 "state": "configuring", 00:15:58.715 "raid_level": "raid5f", 00:15:58.715 "superblock": true, 00:15:58.715 "num_base_bdevs": 4, 00:15:58.715 "num_base_bdevs_discovered": 3, 00:15:58.715 "num_base_bdevs_operational": 4, 00:15:58.715 "base_bdevs_list": [ 00:15:58.715 { 00:15:58.715 "name": null, 00:15:58.715 "uuid": "1e0a730a-b2dd-424e-941e-fc960f961367", 00:15:58.715 "is_configured": false, 00:15:58.715 "data_offset": 0, 00:15:58.715 "data_size": 63488 00:15:58.715 }, 00:15:58.715 { 00:15:58.715 "name": "BaseBdev2", 00:15:58.715 "uuid": "56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:58.715 "is_configured": true, 00:15:58.715 "data_offset": 2048, 00:15:58.715 "data_size": 63488 00:15:58.715 }, 00:15:58.715 { 00:15:58.716 "name": "BaseBdev3", 00:15:58.716 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:58.716 "is_configured": true, 00:15:58.716 "data_offset": 2048, 00:15:58.716 "data_size": 63488 00:15:58.716 }, 00:15:58.716 { 00:15:58.716 "name": "BaseBdev4", 00:15:58.716 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:58.716 "is_configured": true, 00:15:58.716 "data_offset": 2048, 00:15:58.716 "data_size": 63488 00:15:58.716 } 00:15:58.716 ] 00:15:58.716 }' 00:15:58.716 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.716 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:58.975 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.975 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.975 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:58.975 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.975 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e0a730a-b2dd-424e-941e-fc960f961367 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.235 [2024-12-12 19:44:41.911244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:59.235 [2024-12-12 19:44:41.911461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:59.235 [2024-12-12 
19:44:41.911473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:59.235 [2024-12-12 19:44:41.911745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:59.235 NewBaseBdev 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.235 [2024-12-12 19:44:41.918209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:59.235 [2024-12-12 19:44:41.918228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:59.235 [2024-12-12 19:44:41.918396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.235 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.235 [ 00:15:59.235 { 00:15:59.235 "name": "NewBaseBdev", 00:15:59.235 "aliases": [ 00:15:59.235 "1e0a730a-b2dd-424e-941e-fc960f961367" 00:15:59.235 ], 00:15:59.235 "product_name": "Malloc disk", 00:15:59.235 "block_size": 512, 00:15:59.235 "num_blocks": 65536, 00:15:59.235 "uuid": "1e0a730a-b2dd-424e-941e-fc960f961367", 00:15:59.235 "assigned_rate_limits": { 00:15:59.235 "rw_ios_per_sec": 0, 00:15:59.235 "rw_mbytes_per_sec": 0, 00:15:59.235 "r_mbytes_per_sec": 0, 00:15:59.235 "w_mbytes_per_sec": 0 00:15:59.235 }, 00:15:59.235 "claimed": true, 00:15:59.235 "claim_type": "exclusive_write", 00:15:59.235 "zoned": false, 00:15:59.235 "supported_io_types": { 00:15:59.235 "read": true, 00:15:59.235 "write": true, 00:15:59.235 "unmap": true, 00:15:59.235 "flush": true, 00:15:59.235 "reset": true, 00:15:59.235 "nvme_admin": false, 00:15:59.235 "nvme_io": false, 00:15:59.235 "nvme_io_md": false, 00:15:59.235 "write_zeroes": true, 00:15:59.235 "zcopy": true, 00:15:59.235 "get_zone_info": false, 00:15:59.235 "zone_management": false, 00:15:59.235 "zone_append": false, 00:15:59.235 "compare": false, 00:15:59.235 "compare_and_write": false, 00:15:59.235 "abort": true, 00:15:59.235 "seek_hole": false, 00:15:59.235 "seek_data": false, 00:15:59.235 "copy": true, 00:15:59.236 "nvme_iov_md": false 00:15:59.236 }, 00:15:59.236 "memory_domains": [ 00:15:59.236 { 00:15:59.236 "dma_device_id": "system", 00:15:59.236 "dma_device_type": 1 00:15:59.236 }, 00:15:59.236 { 00:15:59.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.236 "dma_device_type": 2 00:15:59.236 } 00:15:59.236 ], 00:15:59.236 "driver_specific": {} 00:15:59.236 } 00:15:59.236 ] 00:15:59.236 19:44:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.236 19:44:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:59.236 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.236 "name": "Existed_Raid", 00:15:59.236 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:59.236 "strip_size_kb": 64, 00:15:59.236 "state": "online", 00:15:59.236 "raid_level": "raid5f", 00:15:59.236 "superblock": true, 00:15:59.236 "num_base_bdevs": 4, 00:15:59.236 "num_base_bdevs_discovered": 4, 00:15:59.236 "num_base_bdevs_operational": 4, 00:15:59.236 "base_bdevs_list": [ 00:15:59.236 { 00:15:59.236 "name": "NewBaseBdev", 00:15:59.236 "uuid": "1e0a730a-b2dd-424e-941e-fc960f961367", 00:15:59.236 "is_configured": true, 00:15:59.236 "data_offset": 2048, 00:15:59.236 "data_size": 63488 00:15:59.236 }, 00:15:59.236 { 00:15:59.236 "name": "BaseBdev2", 00:15:59.236 "uuid": "56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:59.236 "is_configured": true, 00:15:59.236 "data_offset": 2048, 00:15:59.236 "data_size": 63488 00:15:59.236 }, 00:15:59.236 { 00:15:59.236 "name": "BaseBdev3", 00:15:59.236 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:59.236 "is_configured": true, 00:15:59.236 "data_offset": 2048, 00:15:59.236 "data_size": 63488 00:15:59.236 }, 00:15:59.236 { 00:15:59.236 "name": "BaseBdev4", 00:15:59.236 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:59.236 "is_configured": true, 00:15:59.236 "data_offset": 2048, 00:15:59.236 "data_size": 63488 00:15:59.236 } 00:15:59.236 ] 00:15:59.236 }' 00:15:59.236 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.236 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.805 [2024-12-12 19:44:42.413339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.805 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.805 "name": "Existed_Raid", 00:15:59.805 "aliases": [ 00:15:59.805 "98a35cd9-9efc-43a1-9069-d2545ff2641d" 00:15:59.805 ], 00:15:59.805 "product_name": "Raid Volume", 00:15:59.805 "block_size": 512, 00:15:59.805 "num_blocks": 190464, 00:15:59.805 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:59.805 "assigned_rate_limits": { 00:15:59.805 "rw_ios_per_sec": 0, 00:15:59.805 "rw_mbytes_per_sec": 0, 00:15:59.805 "r_mbytes_per_sec": 0, 00:15:59.805 "w_mbytes_per_sec": 0 00:15:59.805 }, 00:15:59.805 "claimed": false, 00:15:59.805 "zoned": false, 00:15:59.805 "supported_io_types": { 00:15:59.805 "read": true, 00:15:59.805 "write": true, 00:15:59.805 "unmap": false, 00:15:59.805 "flush": false, 00:15:59.805 "reset": true, 00:15:59.805 "nvme_admin": false, 00:15:59.805 "nvme_io": false, 
00:15:59.805 "nvme_io_md": false, 00:15:59.805 "write_zeroes": true, 00:15:59.806 "zcopy": false, 00:15:59.806 "get_zone_info": false, 00:15:59.806 "zone_management": false, 00:15:59.806 "zone_append": false, 00:15:59.806 "compare": false, 00:15:59.806 "compare_and_write": false, 00:15:59.806 "abort": false, 00:15:59.806 "seek_hole": false, 00:15:59.806 "seek_data": false, 00:15:59.806 "copy": false, 00:15:59.806 "nvme_iov_md": false 00:15:59.806 }, 00:15:59.806 "driver_specific": { 00:15:59.806 "raid": { 00:15:59.806 "uuid": "98a35cd9-9efc-43a1-9069-d2545ff2641d", 00:15:59.806 "strip_size_kb": 64, 00:15:59.806 "state": "online", 00:15:59.806 "raid_level": "raid5f", 00:15:59.806 "superblock": true, 00:15:59.806 "num_base_bdevs": 4, 00:15:59.806 "num_base_bdevs_discovered": 4, 00:15:59.806 "num_base_bdevs_operational": 4, 00:15:59.806 "base_bdevs_list": [ 00:15:59.806 { 00:15:59.806 "name": "NewBaseBdev", 00:15:59.806 "uuid": "1e0a730a-b2dd-424e-941e-fc960f961367", 00:15:59.806 "is_configured": true, 00:15:59.806 "data_offset": 2048, 00:15:59.806 "data_size": 63488 00:15:59.806 }, 00:15:59.806 { 00:15:59.806 "name": "BaseBdev2", 00:15:59.806 "uuid": "56cc6eb0-edbc-449e-a164-dee062ded7d7", 00:15:59.806 "is_configured": true, 00:15:59.806 "data_offset": 2048, 00:15:59.806 "data_size": 63488 00:15:59.806 }, 00:15:59.806 { 00:15:59.806 "name": "BaseBdev3", 00:15:59.806 "uuid": "026992af-98b8-45bb-9452-18b1797f2ebb", 00:15:59.806 "is_configured": true, 00:15:59.806 "data_offset": 2048, 00:15:59.806 "data_size": 63488 00:15:59.806 }, 00:15:59.806 { 00:15:59.806 "name": "BaseBdev4", 00:15:59.806 "uuid": "9d6dec0b-b0f5-4fb0-9c7d-263edbd1800d", 00:15:59.806 "is_configured": true, 00:15:59.806 "data_offset": 2048, 00:15:59.806 "data_size": 63488 00:15:59.806 } 00:15:59.806 ] 00:15:59.806 } 00:15:59.806 } 00:15:59.806 }' 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:59.806 BaseBdev2 00:15:59.806 BaseBdev3 00:15:59.806 BaseBdev4' 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.806 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.066 19:44:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.066 [2024-12-12 19:44:42.760566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.066 [2024-12-12 19:44:42.760590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.066 [2024-12-12 19:44:42.760653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.066 [2024-12-12 19:44:42.760934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.066 [2024-12-12 19:44:42.760944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.066 19:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 85112 00:16:00.067 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85112 ']' 00:16:00.067 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 85112 00:16:00.067 19:44:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:00.067 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.067 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85112 00:16:00.067 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.067 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.067 killing process with pid 85112 00:16:00.067 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85112' 00:16:00.067 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 85112 00:16:00.067 [2024-12-12 19:44:42.805903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.067 19:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 85112 00:16:00.636 [2024-12-12 19:44:43.174434] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.574 19:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:01.574 00:16:01.574 real 0m11.425s 00:16:01.574 user 0m18.209s 00:16:01.574 sys 0m2.087s 00:16:01.574 19:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.574 ************************************ 00:16:01.574 END TEST raid5f_state_function_test_sb 00:16:01.574 ************************************ 00:16:01.574 19:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.574 19:44:44 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:01.574 19:44:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:01.574 
19:44:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.574 19:44:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.574 ************************************ 00:16:01.574 START TEST raid5f_superblock_test 00:16:01.574 ************************************ 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85779
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85779
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85779 ']'
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:01.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:01.574 19:44:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.835 [2024-12-12 19:44:44.421336] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:16:01.835 [2024-12-12 19:44:44.421536] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85779 ]
00:16:01.835 [2024-12-12 19:44:44.599527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:02.095 [2024-12-12 19:44:44.706604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:02.095 [2024-12-12 19:44:44.890177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:02.095 [2024-12-12 19:44:44.890329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.708 malloc1
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.708 [2024-12-12 19:44:45.278631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:02.708 [2024-12-12 19:44:45.278737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:02.708 [2024-12-12 19:44:45.278773] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:02.708 [2024-12-12 19:44:45.278800] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:02.708 [2024-12-12 19:44:45.280755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:02.708 [2024-12-12 19:44:45.280822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:02.708 pt1
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.708 malloc2
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.708 [2024-12-12 19:44:45.333145] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:02.708 [2024-12-12 19:44:45.333232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:02.708 [2024-12-12 19:44:45.333269] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:02.708 [2024-12-12 19:44:45.333294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:02.708 [2024-12-12 19:44:45.335294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:02.708 [2024-12-12 19:44:45.335363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:02.708 pt2
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.708 malloc3
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.708 [2024-12-12 19:44:45.426215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:02.708 [2024-12-12 19:44:45.426322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:02.708 [2024-12-12 19:44:45.426360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:02.708 [2024-12-12 19:44:45.426388] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:02.708 [2024-12-12 19:44:45.428285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:02.708 pt3
00:16:02.708 [2024-12-12 19:44:45.428355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:02.708 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.709 malloc4
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.709 [2024-12-12 19:44:45.481221] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:02.709 [2024-12-12 19:44:45.481310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:02.709 [2024-12-12 19:44:45.481347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:16:02.709 [2024-12-12 19:44:45.481374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:02.709 [2024-12-12 19:44:45.483330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:02.709 [2024-12-12 19:44:45.483398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:02.709 pt4
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.709 [2024-12-12 19:44:45.493237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:02.709 [2024-12-12 19:44:45.494912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:02.709 [2024-12-12 19:44:45.495030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:02.709 [2024-12-12 19:44:45.495097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:02.709 [2024-12-12 19:44:45.495331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:16:02.709 [2024-12-12 19:44:45.495380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:02.709 [2024-12-12 19:44:45.495669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:02.709 [2024-12-12 19:44:45.502061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:16:02.709 [2024-12-12 19:44:45.502117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:16:02.709 [2024-12-12 19:44:45.502364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.709 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.969 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:02.969 "name": "raid_bdev1",
00:16:02.969 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197",
00:16:02.969 "strip_size_kb": 64,
00:16:02.969 "state": "online",
00:16:02.969 "raid_level": "raid5f",
00:16:02.969 "superblock": true,
00:16:02.969 "num_base_bdevs": 4,
00:16:02.969 "num_base_bdevs_discovered": 4,
00:16:02.969 "num_base_bdevs_operational": 4,
00:16:02.969 "base_bdevs_list": [
00:16:02.969 {
00:16:02.969 "name": "pt1",
00:16:02.969 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:02.969 "is_configured": true,
00:16:02.969 "data_offset": 2048,
00:16:02.969 "data_size": 63488
00:16:02.969 },
00:16:02.969 {
00:16:02.969 "name": "pt2",
00:16:02.969 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:02.969 "is_configured": true,
00:16:02.969 "data_offset": 2048,
00:16:02.969 "data_size": 63488
00:16:02.969 },
00:16:02.969 {
00:16:02.969 "name": "pt3",
00:16:02.969 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:02.969 "is_configured": true,
00:16:02.969 "data_offset": 2048,
00:16:02.969 "data_size": 63488
00:16:02.969 },
00:16:02.969 {
00:16:02.969 "name": "pt4",
00:16:02.969 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:02.969 "is_configured": true,
00:16:02.969 "data_offset": 2048,
00:16:02.969 "data_size": 63488
00:16:02.969 }
00:16:02.969 ]
00:16:02.969 }'
00:16:02.969 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:02.969 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.229 [2024-12-12 19:44:45.953308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:03.229 "name": "raid_bdev1",
00:16:03.229 "aliases": [
00:16:03.229 "f71f4da1-20ad-4446-96b2-c68a9f3c0197"
00:16:03.229 ],
00:16:03.229 "product_name": "Raid Volume",
00:16:03.229 "block_size": 512,
00:16:03.229 "num_blocks": 190464,
00:16:03.229 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197",
00:16:03.229 "assigned_rate_limits": {
00:16:03.229 "rw_ios_per_sec": 0,
00:16:03.229 "rw_mbytes_per_sec": 0,
00:16:03.229 "r_mbytes_per_sec": 0,
00:16:03.229 "w_mbytes_per_sec": 0
00:16:03.229 },
00:16:03.229 "claimed": false,
00:16:03.229 "zoned": false,
00:16:03.229 "supported_io_types": {
00:16:03.229 "read": true,
00:16:03.229 "write": true,
00:16:03.229 "unmap": false,
00:16:03.229 "flush": false,
00:16:03.229 "reset": true,
00:16:03.229 "nvme_admin": false,
00:16:03.229 "nvme_io": false,
00:16:03.229 "nvme_io_md": false,
00:16:03.229 "write_zeroes": true,
00:16:03.229 "zcopy": false,
00:16:03.229 "get_zone_info": false,
00:16:03.229 "zone_management": false,
00:16:03.229 "zone_append": false,
00:16:03.229 "compare": false,
00:16:03.229 "compare_and_write": false,
00:16:03.229 "abort": false,
00:16:03.229 "seek_hole": false,
00:16:03.229 "seek_data": false,
00:16:03.229 "copy": false,
00:16:03.229 "nvme_iov_md": false
00:16:03.229 },
00:16:03.229 "driver_specific": {
00:16:03.229 "raid": {
00:16:03.229 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197",
00:16:03.229 "strip_size_kb": 64,
00:16:03.229 "state": "online",
00:16:03.229 "raid_level": "raid5f",
00:16:03.229 "superblock": true,
00:16:03.229 "num_base_bdevs": 4,
00:16:03.229 "num_base_bdevs_discovered": 4,
00:16:03.229 "num_base_bdevs_operational": 4,
00:16:03.229 "base_bdevs_list": [
00:16:03.229 {
00:16:03.229 "name": "pt1",
00:16:03.229 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:03.229 "is_configured": true,
00:16:03.229 "data_offset": 2048,
00:16:03.229 "data_size": 63488
00:16:03.229 },
00:16:03.229 {
00:16:03.229 "name": "pt2",
00:16:03.229 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:03.229 "is_configured": true,
00:16:03.229 "data_offset": 2048,
00:16:03.229 "data_size": 63488
00:16:03.229 },
00:16:03.229 {
00:16:03.229 "name": "pt3",
00:16:03.229 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:03.229 "is_configured": true,
00:16:03.229 "data_offset": 2048,
00:16:03.229 "data_size": 63488
00:16:03.229 },
00:16:03.229 {
00:16:03.229 "name": "pt4",
00:16:03.229 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:03.229 "is_configured": true,
00:16:03.229 "data_offset": 2048,
00:16:03.229 "data_size": 63488
00:16:03.229 }
00:16:03.229 ]
00:16:03.229 }
00:16:03.229 }
00:16:03.229 }'
00:16:03.229 19:44:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:03.229 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:03.229 pt2
00:16:03.229 pt3
00:16:03.229 pt4'
00:16:03.229 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.489 [2024-12-12 19:44:46.300758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:03.489 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f71f4da1-20ad-4446-96b2-c68a9f3c0197
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f71f4da1-20ad-4446-96b2-c68a9f3c0197 ']'
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.750 [2024-12-12 19:44:46.344509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:03.750 [2024-12-12 19:44:46.344591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:03.750 [2024-12-12 19:44:46.344689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:03.750 [2024-12-12 19:44:46.344781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:03.750 [2024-12-12 19:44:46.344841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.750 [2024-12-12 19:44:46.508250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:03.750 [2024-12-12 19:44:46.509990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:03.750 [2024-12-12 19:44:46.510074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:03.750 [2024-12-12 19:44:46.510124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:16:03.750 [2024-12-12 19:44:46.510190] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:16:03.750 [2024-12-12 19:44:46.510277] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:16:03.750 [2024-12-12 19:44:46.510356] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:16:03.750 [2024-12-12 19:44:46.510423] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:16:03.750 [2024-12-12 19:44:46.510480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:03.750 [2024-12-12 19:44:46.510515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:16:03.750 request:
00:16:03.750 {
00:16:03.750 "name": "raid_bdev1",
00:16:03.750 "raid_level": "raid5f",
00:16:03.750 "base_bdevs": [
00:16:03.750 "malloc1",
00:16:03.750 "malloc2",
00:16:03.750 "malloc3",
00:16:03.750 "malloc4"
00:16:03.750 ],
00:16:03.750 "strip_size_kb": 64,
00:16:03.750 "superblock": false,
00:16:03.750 "method": "bdev_raid_create",
00:16:03.750 "req_id": 1
00:16:03.750 }
00:16:03.750 Got JSON-RPC error response
00:16:03.750 response:
00:16:03.750 {
00:16:03.750 "code": -17,
00:16:03.750 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:03.750 }
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:16:03.750 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.751 [2024-12-12 19:44:46.576107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:03.751 [2024-12-12 19:44:46.576189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:03.751 [2024-12-12 19:44:46.576220] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:16:03.751 [2024-12-12 19:44:46.576247] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:03.751 [2024-12-12 19:44:46.578288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:03.751 [2024-12-12 19:44:46.578368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:03.751 [2024-12-12 19:44:46.578456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:03.751 [2024-12-12 19:44:46.578521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:03.751 pt1
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.751 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.010 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.010 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:04.010 "name": "raid_bdev1",
00:16:04.010 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197",
00:16:04.010 "strip_size_kb": 64,
00:16:04.010 "state": "configuring",
00:16:04.010 "raid_level": "raid5f",
00:16:04.010 "superblock": true,
00:16:04.010 "num_base_bdevs": 4,
00:16:04.010 "num_base_bdevs_discovered": 1,
00:16:04.010 "num_base_bdevs_operational": 4,
00:16:04.010 "base_bdevs_list": [
00:16:04.010 {
00:16:04.010 "name": "pt1",
00:16:04.010 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:04.010 "is_configured": true,
00:16:04.010 "data_offset": 2048,
00:16:04.010 "data_size": 63488
00:16:04.010 },
00:16:04.010 {
00:16:04.010 "name": null,
00:16:04.010 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:04.010 "is_configured": false,
00:16:04.010 "data_offset": 2048,
00:16:04.010 "data_size": 63488
00:16:04.010 },
00:16:04.010 {
00:16:04.010 "name": null,
00:16:04.010 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:04.010 "is_configured": false,
00:16:04.010 "data_offset": 2048,
00:16:04.010 "data_size": 63488
00:16:04.010 },
00:16:04.010 {
00:16:04.010 "name": null,
00:16:04.010 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:04.010 "is_configured": false,
00:16:04.010 "data_offset": 2048,
00:16:04.010 "data_size": 63488
00:16:04.010 }
00:16:04.010 ]
00:16:04.010 }'
00:16:04.010 19:44:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.010 19:44:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.270 [2024-12-12 19:44:47.015377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.270 [2024-12-12 19:44:47.015471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.270 [2024-12-12 19:44:47.015504] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:04.270 [2024-12-12 19:44:47.015516] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.270 [2024-12-12 19:44:47.015958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.270 [2024-12-12 19:44:47.015979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.270 [2024-12-12 19:44:47.016043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.270 [2024-12-12 19:44:47.016063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.270 pt2 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.270 [2024-12-12 19:44:47.027369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.270 "name": "raid_bdev1", 00:16:04.270 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:04.270 "strip_size_kb": 64, 00:16:04.270 "state": "configuring", 00:16:04.270 "raid_level": "raid5f", 00:16:04.270 "superblock": true, 00:16:04.270 "num_base_bdevs": 4, 00:16:04.270 "num_base_bdevs_discovered": 1, 00:16:04.270 "num_base_bdevs_operational": 4, 00:16:04.270 "base_bdevs_list": [ 00:16:04.270 { 00:16:04.270 "name": "pt1", 00:16:04.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.270 "is_configured": true, 00:16:04.270 "data_offset": 2048, 00:16:04.270 "data_size": 63488 00:16:04.270 }, 00:16:04.270 { 00:16:04.270 "name": null, 00:16:04.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.270 "is_configured": false, 00:16:04.270 "data_offset": 0, 00:16:04.270 "data_size": 63488 00:16:04.270 }, 00:16:04.270 { 00:16:04.270 "name": null, 00:16:04.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.270 "is_configured": false, 00:16:04.270 "data_offset": 2048, 00:16:04.270 "data_size": 63488 00:16:04.270 }, 00:16:04.270 { 00:16:04.270 "name": null, 00:16:04.270 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:04.270 "is_configured": false, 00:16:04.270 "data_offset": 2048, 00:16:04.270 "data_size": 63488 00:16:04.270 } 00:16:04.270 ] 00:16:04.270 }' 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.270 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.840 [2024-12-12 19:44:47.510529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.840 [2024-12-12 19:44:47.510627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.840 [2024-12-12 19:44:47.510662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:04.840 [2024-12-12 19:44:47.510689] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.840 [2024-12-12 19:44:47.511103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.840 [2024-12-12 19:44:47.511157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.840 [2024-12-12 19:44:47.511266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.840 [2024-12-12 19:44:47.511314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.840 pt2 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.840 [2024-12-12 19:44:47.522500] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:04.840 [2024-12-12 19:44:47.522594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.840 [2024-12-12 19:44:47.522628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:04.840 [2024-12-12 19:44:47.522655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.840 [2024-12-12 19:44:47.523023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.840 [2024-12-12 19:44:47.523080] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:04.840 [2024-12-12 19:44:47.523182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:04.840 [2024-12-12 19:44:47.523234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:04.840 pt3 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.840 [2024-12-12 19:44:47.534460] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:04.840 [2024-12-12 19:44:47.534532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.840 [2024-12-12 19:44:47.534571] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:04.840 [2024-12-12 19:44:47.534596] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.840 [2024-12-12 19:44:47.534972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.840 [2024-12-12 19:44:47.535024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:04.840 [2024-12-12 19:44:47.535113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:04.840 [2024-12-12 19:44:47.535158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:04.840 [2024-12-12 19:44:47.535318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:04.840 [2024-12-12 19:44:47.535355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.840 [2024-12-12 19:44:47.535610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:04.840 [2024-12-12 19:44:47.542007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:04.840 [2024-12-12 19:44:47.542066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:04.840 [2024-12-12 19:44:47.542286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.840 pt4 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.840 "name": "raid_bdev1", 00:16:04.840 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:04.840 "strip_size_kb": 64, 00:16:04.840 "state": "online", 00:16:04.840 "raid_level": "raid5f", 00:16:04.840 "superblock": true, 00:16:04.840 "num_base_bdevs": 4, 00:16:04.840 "num_base_bdevs_discovered": 4, 00:16:04.840 "num_base_bdevs_operational": 4, 00:16:04.840 "base_bdevs_list": [ 00:16:04.840 { 00:16:04.840 "name": "pt1", 00:16:04.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.840 "is_configured": true, 00:16:04.840 
"data_offset": 2048, 00:16:04.840 "data_size": 63488 00:16:04.840 }, 00:16:04.840 { 00:16:04.840 "name": "pt2", 00:16:04.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.840 "is_configured": true, 00:16:04.840 "data_offset": 2048, 00:16:04.840 "data_size": 63488 00:16:04.840 }, 00:16:04.840 { 00:16:04.840 "name": "pt3", 00:16:04.840 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.840 "is_configured": true, 00:16:04.840 "data_offset": 2048, 00:16:04.840 "data_size": 63488 00:16:04.840 }, 00:16:04.840 { 00:16:04.840 "name": "pt4", 00:16:04.840 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:04.840 "is_configured": true, 00:16:04.840 "data_offset": 2048, 00:16:04.840 "data_size": 63488 00:16:04.840 } 00:16:04.840 ] 00:16:04.840 }' 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.840 19:44:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.410 19:44:48 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.410 [2024-12-12 19:44:48.014528] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.410 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:05.410 "name": "raid_bdev1", 00:16:05.410 "aliases": [ 00:16:05.410 "f71f4da1-20ad-4446-96b2-c68a9f3c0197" 00:16:05.410 ], 00:16:05.410 "product_name": "Raid Volume", 00:16:05.410 "block_size": 512, 00:16:05.410 "num_blocks": 190464, 00:16:05.410 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:05.410 "assigned_rate_limits": { 00:16:05.410 "rw_ios_per_sec": 0, 00:16:05.410 "rw_mbytes_per_sec": 0, 00:16:05.410 "r_mbytes_per_sec": 0, 00:16:05.410 "w_mbytes_per_sec": 0 00:16:05.410 }, 00:16:05.410 "claimed": false, 00:16:05.410 "zoned": false, 00:16:05.410 "supported_io_types": { 00:16:05.410 "read": true, 00:16:05.410 "write": true, 00:16:05.410 "unmap": false, 00:16:05.410 "flush": false, 00:16:05.410 "reset": true, 00:16:05.410 "nvme_admin": false, 00:16:05.410 "nvme_io": false, 00:16:05.410 "nvme_io_md": false, 00:16:05.410 "write_zeroes": true, 00:16:05.410 "zcopy": false, 00:16:05.410 "get_zone_info": false, 00:16:05.410 "zone_management": false, 00:16:05.410 "zone_append": false, 00:16:05.410 "compare": false, 00:16:05.410 "compare_and_write": false, 00:16:05.410 "abort": false, 00:16:05.410 "seek_hole": false, 00:16:05.410 "seek_data": false, 00:16:05.410 "copy": false, 00:16:05.410 "nvme_iov_md": false 00:16:05.410 }, 00:16:05.410 "driver_specific": { 00:16:05.410 "raid": { 00:16:05.410 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:05.410 "strip_size_kb": 64, 00:16:05.410 "state": "online", 00:16:05.410 "raid_level": "raid5f", 00:16:05.410 "superblock": true, 00:16:05.410 "num_base_bdevs": 4, 00:16:05.410 "num_base_bdevs_discovered": 4, 
00:16:05.410 "num_base_bdevs_operational": 4, 00:16:05.410 "base_bdevs_list": [ 00:16:05.410 { 00:16:05.410 "name": "pt1", 00:16:05.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:05.410 "is_configured": true, 00:16:05.410 "data_offset": 2048, 00:16:05.410 "data_size": 63488 00:16:05.410 }, 00:16:05.410 { 00:16:05.410 "name": "pt2", 00:16:05.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.411 "is_configured": true, 00:16:05.411 "data_offset": 2048, 00:16:05.411 "data_size": 63488 00:16:05.411 }, 00:16:05.411 { 00:16:05.411 "name": "pt3", 00:16:05.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.411 "is_configured": true, 00:16:05.411 "data_offset": 2048, 00:16:05.411 "data_size": 63488 00:16:05.411 }, 00:16:05.411 { 00:16:05.411 "name": "pt4", 00:16:05.411 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:05.411 "is_configured": true, 00:16:05.411 "data_offset": 2048, 00:16:05.411 "data_size": 63488 00:16:05.411 } 00:16:05.411 ] 00:16:05.411 } 00:16:05.411 } 00:16:05.411 }' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:05.411 pt2 00:16:05.411 pt3 00:16:05.411 pt4' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.411 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.411 19:44:48 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.671 [2024-12-12 19:44:48.350061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.671 
19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f71f4da1-20ad-4446-96b2-c68a9f3c0197 '!=' f71f4da1-20ad-4446-96b2-c68a9f3c0197 ']' 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.671 [2024-12-12 19:44:48.397877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.671 "name": "raid_bdev1", 00:16:05.671 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:05.671 "strip_size_kb": 64, 00:16:05.671 "state": "online", 00:16:05.671 "raid_level": "raid5f", 00:16:05.671 "superblock": true, 00:16:05.671 "num_base_bdevs": 4, 00:16:05.671 "num_base_bdevs_discovered": 3, 00:16:05.671 "num_base_bdevs_operational": 3, 00:16:05.671 "base_bdevs_list": [ 00:16:05.671 { 00:16:05.671 "name": null, 00:16:05.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.671 "is_configured": false, 00:16:05.671 "data_offset": 0, 00:16:05.671 "data_size": 63488 00:16:05.671 }, 00:16:05.671 { 00:16:05.671 "name": "pt2", 00:16:05.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.671 "is_configured": true, 00:16:05.671 "data_offset": 2048, 00:16:05.671 "data_size": 63488 00:16:05.671 }, 00:16:05.671 { 00:16:05.671 "name": "pt3", 00:16:05.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.671 "is_configured": true, 00:16:05.671 "data_offset": 2048, 00:16:05.671 "data_size": 63488 00:16:05.671 }, 00:16:05.671 { 00:16:05.671 "name": "pt4", 00:16:05.671 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:05.671 "is_configured": true, 00:16:05.671 
"data_offset": 2048, 00:16:05.671 "data_size": 63488 00:16:05.671 } 00:16:05.671 ] 00:16:05.671 }' 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.671 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 [2024-12-12 19:44:48.825101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.241 [2024-12-12 19:44:48.825164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.241 [2024-12-12 19:44:48.825240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.241 [2024-12-12 19:44:48.825320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.241 [2024-12-12 19:44:48.825390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 [2024-12-12 19:44:48.920929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:06.241 [2024-12-12 19:44:48.921011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.241 [2024-12-12 19:44:48.921042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:06.241 [2024-12-12 19:44:48.921069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.241 [2024-12-12 19:44:48.923066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.241 pt2 00:16:06.241 [2024-12-12 19:44:48.923158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:06.241 [2024-12-12 19:44:48.923247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:06.241 [2024-12-12 19:44:48.923297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.241 "name": "raid_bdev1", 00:16:06.241 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:06.241 "strip_size_kb": 64, 00:16:06.241 "state": "configuring", 00:16:06.241 "raid_level": "raid5f", 00:16:06.241 "superblock": true, 00:16:06.241 
"num_base_bdevs": 4, 00:16:06.242 "num_base_bdevs_discovered": 1, 00:16:06.242 "num_base_bdevs_operational": 3, 00:16:06.242 "base_bdevs_list": [ 00:16:06.242 { 00:16:06.242 "name": null, 00:16:06.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.242 "is_configured": false, 00:16:06.242 "data_offset": 2048, 00:16:06.242 "data_size": 63488 00:16:06.242 }, 00:16:06.242 { 00:16:06.242 "name": "pt2", 00:16:06.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.242 "is_configured": true, 00:16:06.242 "data_offset": 2048, 00:16:06.242 "data_size": 63488 00:16:06.242 }, 00:16:06.242 { 00:16:06.242 "name": null, 00:16:06.242 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.242 "is_configured": false, 00:16:06.242 "data_offset": 2048, 00:16:06.242 "data_size": 63488 00:16:06.242 }, 00:16:06.242 { 00:16:06.242 "name": null, 00:16:06.242 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:06.242 "is_configured": false, 00:16:06.242 "data_offset": 2048, 00:16:06.242 "data_size": 63488 00:16:06.242 } 00:16:06.242 ] 00:16:06.242 }' 00:16:06.242 19:44:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.242 19:44:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.811 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:06.811 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:06.811 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:06.811 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.811 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.811 [2024-12-12 19:44:49.416090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:06.811 [2024-12-12 
19:44:49.416192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.811 [2024-12-12 19:44:49.416231] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:06.811 [2024-12-12 19:44:49.416258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.811 [2024-12-12 19:44:49.416672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.811 [2024-12-12 19:44:49.416724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:06.811 [2024-12-12 19:44:49.416828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:06.811 [2024-12-12 19:44:49.416874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:06.811 pt3 00:16:06.811 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.811 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:06.811 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.811 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.812 "name": "raid_bdev1", 00:16:06.812 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:06.812 "strip_size_kb": 64, 00:16:06.812 "state": "configuring", 00:16:06.812 "raid_level": "raid5f", 00:16:06.812 "superblock": true, 00:16:06.812 "num_base_bdevs": 4, 00:16:06.812 "num_base_bdevs_discovered": 2, 00:16:06.812 "num_base_bdevs_operational": 3, 00:16:06.812 "base_bdevs_list": [ 00:16:06.812 { 00:16:06.812 "name": null, 00:16:06.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.812 "is_configured": false, 00:16:06.812 "data_offset": 2048, 00:16:06.812 "data_size": 63488 00:16:06.812 }, 00:16:06.812 { 00:16:06.812 "name": "pt2", 00:16:06.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.812 "is_configured": true, 00:16:06.812 "data_offset": 2048, 00:16:06.812 "data_size": 63488 00:16:06.812 }, 00:16:06.812 { 00:16:06.812 "name": "pt3", 00:16:06.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.812 "is_configured": true, 00:16:06.812 "data_offset": 2048, 00:16:06.812 "data_size": 63488 00:16:06.812 }, 00:16:06.812 { 00:16:06.812 "name": null, 00:16:06.812 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:06.812 "is_configured": false, 00:16:06.812 "data_offset": 2048, 
00:16:06.812 "data_size": 63488 00:16:06.812 } 00:16:06.812 ] 00:16:06.812 }' 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.812 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.072 [2024-12-12 19:44:49.811449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:07.072 [2024-12-12 19:44:49.811536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.072 [2024-12-12 19:44:49.811586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:07.072 [2024-12-12 19:44:49.811613] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.072 [2024-12-12 19:44:49.812043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.072 [2024-12-12 19:44:49.812097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:07.072 [2024-12-12 19:44:49.812202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:07.072 [2024-12-12 19:44:49.812259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:07.072 [2024-12-12 19:44:49.812434] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:07.072 [2024-12-12 19:44:49.812471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:07.072 [2024-12-12 19:44:49.812761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:07.072 [2024-12-12 19:44:49.819403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:07.072 [2024-12-12 19:44:49.819463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:07.072 [2024-12-12 19:44:49.819794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.072 pt4 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.072 
19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.072 "name": "raid_bdev1", 00:16:07.072 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:07.072 "strip_size_kb": 64, 00:16:07.072 "state": "online", 00:16:07.072 "raid_level": "raid5f", 00:16:07.072 "superblock": true, 00:16:07.072 "num_base_bdevs": 4, 00:16:07.072 "num_base_bdevs_discovered": 3, 00:16:07.072 "num_base_bdevs_operational": 3, 00:16:07.072 "base_bdevs_list": [ 00:16:07.072 { 00:16:07.072 "name": null, 00:16:07.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.072 "is_configured": false, 00:16:07.072 "data_offset": 2048, 00:16:07.072 "data_size": 63488 00:16:07.072 }, 00:16:07.072 { 00:16:07.072 "name": "pt2", 00:16:07.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.072 "is_configured": true, 00:16:07.072 "data_offset": 2048, 00:16:07.072 "data_size": 63488 00:16:07.072 }, 00:16:07.072 { 00:16:07.072 "name": "pt3", 00:16:07.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.072 "is_configured": true, 00:16:07.072 "data_offset": 2048, 00:16:07.072 "data_size": 63488 00:16:07.072 }, 00:16:07.072 { 00:16:07.072 "name": "pt4", 00:16:07.072 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.072 "is_configured": true, 00:16:07.072 "data_offset": 2048, 00:16:07.072 "data_size": 63488 00:16:07.072 } 00:16:07.072 ] 00:16:07.072 }' 00:16:07.072 19:44:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.072 19:44:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.641 [2024-12-12 19:44:50.239110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.641 [2024-12-12 19:44:50.239170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.641 [2024-12-12 19:44:50.239246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.641 [2024-12-12 19:44:50.239325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.641 [2024-12-12 19:44:50.239394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.641 [2024-12-12 19:44:50.307002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.641 [2024-12-12 19:44:50.307092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.641 [2024-12-12 19:44:50.307133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:07.641 [2024-12-12 19:44:50.307164] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.641 [2024-12-12 19:44:50.309266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.641 [2024-12-12 19:44:50.309339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.641 [2024-12-12 19:44:50.309430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:07.641 [2024-12-12 19:44:50.309490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.641 
[2024-12-12 19:44:50.309686] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:07.641 [2024-12-12 19:44:50.309748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.641 [2024-12-12 19:44:50.309793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:07.641 [2024-12-12 19:44:50.309907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.641 [2024-12-12 19:44:50.310035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:07.641 pt1 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.641 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.642 19:44:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.642 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.642 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.642 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.642 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.642 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.642 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.642 "name": "raid_bdev1", 00:16:07.642 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:07.642 "strip_size_kb": 64, 00:16:07.642 "state": "configuring", 00:16:07.642 "raid_level": "raid5f", 00:16:07.642 "superblock": true, 00:16:07.642 "num_base_bdevs": 4, 00:16:07.642 "num_base_bdevs_discovered": 2, 00:16:07.642 "num_base_bdevs_operational": 3, 00:16:07.642 "base_bdevs_list": [ 00:16:07.642 { 00:16:07.642 "name": null, 00:16:07.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.642 "is_configured": false, 00:16:07.642 "data_offset": 2048, 00:16:07.642 "data_size": 63488 00:16:07.642 }, 00:16:07.642 { 00:16:07.642 "name": "pt2", 00:16:07.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.642 "is_configured": true, 00:16:07.642 "data_offset": 2048, 00:16:07.642 "data_size": 63488 00:16:07.642 }, 00:16:07.642 { 00:16:07.642 "name": "pt3", 00:16:07.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.642 "is_configured": true, 00:16:07.642 "data_offset": 2048, 00:16:07.642 "data_size": 63488 00:16:07.642 }, 00:16:07.642 { 00:16:07.642 "name": null, 00:16:07.642 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.642 "is_configured": false, 00:16:07.642 "data_offset": 2048, 00:16:07.642 "data_size": 63488 00:16:07.642 } 00:16:07.642 ] 
00:16:07.642 }' 00:16:07.642 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.642 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 [2024-12-12 19:44:50.846360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:08.211 [2024-12-12 19:44:50.846449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.211 [2024-12-12 19:44:50.846484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:08.211 [2024-12-12 19:44:50.846512] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.211 [2024-12-12 19:44:50.846962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.211 [2024-12-12 19:44:50.847018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:08.211 [2024-12-12 19:44:50.847132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:08.211 [2024-12-12 19:44:50.847182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:08.211 [2024-12-12 19:44:50.847364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:08.211 [2024-12-12 19:44:50.847401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:08.211 [2024-12-12 19:44:50.847693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:08.211 [2024-12-12 19:44:50.854511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:08.211 [2024-12-12 19:44:50.854586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:08.211 [2024-12-12 19:44:50.854894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.211 pt4 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.211 19:44:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.211 "name": "raid_bdev1", 00:16:08.211 "uuid": "f71f4da1-20ad-4446-96b2-c68a9f3c0197", 00:16:08.211 "strip_size_kb": 64, 00:16:08.211 "state": "online", 00:16:08.211 "raid_level": "raid5f", 00:16:08.211 "superblock": true, 00:16:08.211 "num_base_bdevs": 4, 00:16:08.211 "num_base_bdevs_discovered": 3, 00:16:08.211 "num_base_bdevs_operational": 3, 00:16:08.211 "base_bdevs_list": [ 00:16:08.211 { 00:16:08.211 "name": null, 00:16:08.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.211 "is_configured": false, 00:16:08.211 "data_offset": 2048, 00:16:08.211 "data_size": 63488 00:16:08.211 }, 00:16:08.211 { 00:16:08.211 "name": "pt2", 00:16:08.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.211 "is_configured": true, 00:16:08.211 "data_offset": 2048, 00:16:08.211 "data_size": 63488 00:16:08.211 }, 00:16:08.211 { 00:16:08.211 "name": "pt3", 00:16:08.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.211 "is_configured": true, 00:16:08.211 "data_offset": 2048, 00:16:08.211 "data_size": 63488 
00:16:08.211 }, 00:16:08.211 { 00:16:08.211 "name": "pt4", 00:16:08.211 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.211 "is_configured": true, 00:16:08.211 "data_offset": 2048, 00:16:08.211 "data_size": 63488 00:16:08.211 } 00:16:08.211 ] 00:16:08.211 }' 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.211 19:44:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.470 19:44:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:08.470 19:44:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:08.470 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.470 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.470 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.730 [2024-12-12 19:44:51.334626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f71f4da1-20ad-4446-96b2-c68a9f3c0197 '!=' f71f4da1-20ad-4446-96b2-c68a9f3c0197 ']' 00:16:08.730 19:44:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85779 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85779 ']' 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85779 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85779 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.730 killing process with pid 85779 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85779' 00:16:08.730 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 85779 00:16:08.730 [2024-12-12 19:44:51.415483] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.731 [2024-12-12 19:44:51.415565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.731 [2024-12-12 19:44:51.415635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.731 [2024-12-12 19:44:51.415648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:08.731 19:44:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 85779 00:16:08.990 [2024-12-12 19:44:51.787297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.367 19:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:10.367 
00:16:10.367 real 0m8.527s 00:16:10.367 user 0m13.408s 00:16:10.367 sys 0m1.646s 00:16:10.367 ************************************ 00:16:10.367 END TEST raid5f_superblock_test 00:16:10.367 ************************************ 00:16:10.367 19:44:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.367 19:44:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.367 19:44:52 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:10.367 19:44:52 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:10.367 19:44:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:10.367 19:44:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.367 19:44:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.367 ************************************ 00:16:10.367 START TEST raid5f_rebuild_test 00:16:10.367 ************************************ 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:10.367 19:44:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:10.367 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86271 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86271 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 86271 ']' 00:16:10.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.368 19:44:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.368 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:10.368 Zero copy mechanism will not be used. 00:16:10.368 [2024-12-12 19:44:53.027152] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:10.368 [2024-12-12 19:44:53.027254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86271 ] 00:16:10.368 [2024-12-12 19:44:53.198971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.627 [2024-12-12 19:44:53.306444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.886 [2024-12-12 19:44:53.504005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.886 [2024-12-12 19:44:53.504062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.146 BaseBdev1_malloc 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.146 [2024-12-12 19:44:53.908768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:11.146 [2024-12-12 19:44:53.908887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.146 [2024-12-12 19:44:53.908926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:11.146 [2024-12-12 19:44:53.908956] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.146 [2024-12-12 19:44:53.911053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.146 [2024-12-12 19:44:53.911141] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.146 BaseBdev1 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.146 BaseBdev2_malloc 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.146 [2024-12-12 19:44:53.962504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:11.146 [2024-12-12 19:44:53.962612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.146 [2024-12-12 19:44:53.962635] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:11.146 [2024-12-12 19:44:53.962647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.146 [2024-12-12 19:44:53.964596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.146 [2024-12-12 19:44:53.964631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:11.146 BaseBdev2 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.146 19:44:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.406 BaseBdev3_malloc 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.406 [2024-12-12 19:44:54.023073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:11.406 [2024-12-12 19:44:54.023175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.406 [2024-12-12 19:44:54.023213] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:11.406 [2024-12-12 19:44:54.023243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.406 
[2024-12-12 19:44:54.025296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.406 [2024-12-12 19:44:54.025371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:11.406 BaseBdev3 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.406 BaseBdev4_malloc 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.406 [2024-12-12 19:44:54.078605] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:11.406 [2024-12-12 19:44:54.078696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.406 [2024-12-12 19:44:54.078743] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:11.406 [2024-12-12 19:44:54.078777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.406 [2024-12-12 19:44:54.080797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.406 [2024-12-12 19:44:54.080869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:11.406 BaseBdev4 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.406 spare_malloc 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.406 spare_delay 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.406 [2024-12-12 19:44:54.143964] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:11.406 [2024-12-12 19:44:54.144052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.406 [2024-12-12 19:44:54.144085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:11.406 [2024-12-12 19:44:54.144117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.406 [2024-12-12 19:44:54.146105] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.406 [2024-12-12 19:44:54.146180] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:11.406 spare 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.406 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.406 [2024-12-12 19:44:54.155987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.406 [2024-12-12 19:44:54.157736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.406 [2024-12-12 19:44:54.157834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.406 [2024-12-12 19:44:54.157937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:11.407 [2024-12-12 19:44:54.158071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:11.407 [2024-12-12 19:44:54.158116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:11.407 [2024-12-12 19:44:54.158422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:11.407 [2024-12-12 19:44:54.165407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:11.407 [2024-12-12 19:44:54.165474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:11.407 [2024-12-12 19:44:54.165706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.407 19:44:54 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.407 "name": "raid_bdev1", 00:16:11.407 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:11.407 "strip_size_kb": 64, 00:16:11.407 "state": "online", 00:16:11.407 
"raid_level": "raid5f", 00:16:11.407 "superblock": false, 00:16:11.407 "num_base_bdevs": 4, 00:16:11.407 "num_base_bdevs_discovered": 4, 00:16:11.407 "num_base_bdevs_operational": 4, 00:16:11.407 "base_bdevs_list": [ 00:16:11.407 { 00:16:11.407 "name": "BaseBdev1", 00:16:11.407 "uuid": "eccd8503-408d-5631-af8e-b9049f56d8f7", 00:16:11.407 "is_configured": true, 00:16:11.407 "data_offset": 0, 00:16:11.407 "data_size": 65536 00:16:11.407 }, 00:16:11.407 { 00:16:11.407 "name": "BaseBdev2", 00:16:11.407 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:11.407 "is_configured": true, 00:16:11.407 "data_offset": 0, 00:16:11.407 "data_size": 65536 00:16:11.407 }, 00:16:11.407 { 00:16:11.407 "name": "BaseBdev3", 00:16:11.407 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:11.407 "is_configured": true, 00:16:11.407 "data_offset": 0, 00:16:11.407 "data_size": 65536 00:16:11.407 }, 00:16:11.407 { 00:16:11.407 "name": "BaseBdev4", 00:16:11.407 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:11.407 "is_configured": true, 00:16:11.407 "data_offset": 0, 00:16:11.407 "data_size": 65536 00:16:11.407 } 00:16:11.407 ] 00:16:11.407 }' 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.407 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.975 [2024-12-12 19:44:54.613150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:11.975 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:12.234 [2024-12-12 19:44:54.876587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:12.234 /dev/nbd0 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.234 1+0 records in 00:16:12.234 1+0 records out 00:16:12.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651642 s, 6.3 MB/s 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:12.234 19:44:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:12.235 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.235 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.235 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:12.235 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:12.235 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:12.235 19:44:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:12.802 512+0 records in 00:16:12.802 512+0 records out 00:16:12.802 100663296 bytes (101 MB, 96 MiB) copied, 0.504367 s, 200 MB/s 00:16:12.802 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:12.802 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.802 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:12.802 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.802 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:12.802 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.802 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:13.061 
[2024-12-12 19:44:55.662797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.061 [2024-12-12 19:44:55.676069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.061 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.061 "name": "raid_bdev1", 00:16:13.061 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:13.061 "strip_size_kb": 64, 00:16:13.061 "state": "online", 00:16:13.061 "raid_level": "raid5f", 00:16:13.061 "superblock": false, 00:16:13.061 "num_base_bdevs": 4, 00:16:13.061 "num_base_bdevs_discovered": 3, 00:16:13.061 "num_base_bdevs_operational": 3, 00:16:13.061 "base_bdevs_list": [ 00:16:13.061 { 00:16:13.061 "name": null, 00:16:13.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.061 "is_configured": false, 00:16:13.061 "data_offset": 0, 00:16:13.061 "data_size": 65536 00:16:13.061 }, 00:16:13.061 { 00:16:13.061 "name": "BaseBdev2", 00:16:13.062 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:13.062 "is_configured": true, 00:16:13.062 "data_offset": 0, 00:16:13.062 "data_size": 65536 00:16:13.062 }, 00:16:13.062 { 00:16:13.062 "name": "BaseBdev3", 00:16:13.062 "uuid": 
"d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:13.062 "is_configured": true, 00:16:13.062 "data_offset": 0, 00:16:13.062 "data_size": 65536 00:16:13.062 }, 00:16:13.062 { 00:16:13.062 "name": "BaseBdev4", 00:16:13.062 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:13.062 "is_configured": true, 00:16:13.062 "data_offset": 0, 00:16:13.062 "data_size": 65536 00:16:13.062 } 00:16:13.062 ] 00:16:13.062 }' 00:16:13.062 19:44:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.062 19:44:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.321 19:44:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.321 19:44:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.321 19:44:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.321 [2024-12-12 19:44:56.135270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.321 [2024-12-12 19:44:56.149191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:13.321 19:44:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.321 19:44:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:13.321 [2024-12-12 19:44:56.157814] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.701 19:44:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.701 "name": "raid_bdev1", 00:16:14.701 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:14.701 "strip_size_kb": 64, 00:16:14.701 "state": "online", 00:16:14.701 "raid_level": "raid5f", 00:16:14.701 "superblock": false, 00:16:14.701 "num_base_bdevs": 4, 00:16:14.701 "num_base_bdevs_discovered": 4, 00:16:14.701 "num_base_bdevs_operational": 4, 00:16:14.701 "process": { 00:16:14.701 "type": "rebuild", 00:16:14.701 "target": "spare", 00:16:14.701 "progress": { 00:16:14.701 "blocks": 19200, 00:16:14.701 "percent": 9 00:16:14.701 } 00:16:14.701 }, 00:16:14.701 "base_bdevs_list": [ 00:16:14.701 { 00:16:14.701 "name": "spare", 00:16:14.701 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:14.701 "is_configured": true, 00:16:14.701 "data_offset": 0, 00:16:14.701 "data_size": 65536 00:16:14.701 }, 00:16:14.701 { 00:16:14.701 "name": "BaseBdev2", 00:16:14.701 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:14.701 "is_configured": true, 00:16:14.701 "data_offset": 0, 00:16:14.701 "data_size": 65536 00:16:14.701 }, 00:16:14.701 { 00:16:14.701 "name": "BaseBdev3", 00:16:14.701 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:14.701 "is_configured": true, 00:16:14.701 "data_offset": 0, 00:16:14.701 "data_size": 65536 00:16:14.701 }, 
00:16:14.701 { 00:16:14.701 "name": "BaseBdev4", 00:16:14.701 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:14.701 "is_configured": true, 00:16:14.701 "data_offset": 0, 00:16:14.701 "data_size": 65536 00:16:14.701 } 00:16:14.701 ] 00:16:14.701 }' 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.701 [2024-12-12 19:44:57.316670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.701 [2024-12-12 19:44:57.363802] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.701 [2024-12-12 19:44:57.363921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.701 [2024-12-12 19:44:57.363956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.701 [2024-12-12 19:44:57.363979] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.701 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.702 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.702 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.702 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.702 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.702 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.702 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.702 "name": "raid_bdev1", 00:16:14.702 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:14.702 "strip_size_kb": 64, 00:16:14.702 "state": "online", 00:16:14.702 "raid_level": "raid5f", 00:16:14.702 "superblock": false, 00:16:14.702 "num_base_bdevs": 4, 00:16:14.702 "num_base_bdevs_discovered": 3, 00:16:14.702 "num_base_bdevs_operational": 3, 00:16:14.702 "base_bdevs_list": [ 00:16:14.702 { 00:16:14.702 "name": null, 00:16:14.702 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:14.702 "is_configured": false, 00:16:14.702 "data_offset": 0, 00:16:14.702 "data_size": 65536 00:16:14.702 }, 00:16:14.702 { 00:16:14.702 "name": "BaseBdev2", 00:16:14.702 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:14.702 "is_configured": true, 00:16:14.702 "data_offset": 0, 00:16:14.702 "data_size": 65536 00:16:14.702 }, 00:16:14.702 { 00:16:14.702 "name": "BaseBdev3", 00:16:14.702 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:14.702 "is_configured": true, 00:16:14.702 "data_offset": 0, 00:16:14.702 "data_size": 65536 00:16:14.702 }, 00:16:14.702 { 00:16:14.702 "name": "BaseBdev4", 00:16:14.702 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:14.702 "is_configured": true, 00:16:14.702 "data_offset": 0, 00:16:14.702 "data_size": 65536 00:16:14.702 } 00:16:14.702 ] 00:16:14.702 }' 00:16:14.702 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.702 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.271 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.271 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.271 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.272 "name": "raid_bdev1", 00:16:15.272 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:15.272 "strip_size_kb": 64, 00:16:15.272 "state": "online", 00:16:15.272 "raid_level": "raid5f", 00:16:15.272 "superblock": false, 00:16:15.272 "num_base_bdevs": 4, 00:16:15.272 "num_base_bdevs_discovered": 3, 00:16:15.272 "num_base_bdevs_operational": 3, 00:16:15.272 "base_bdevs_list": [ 00:16:15.272 { 00:16:15.272 "name": null, 00:16:15.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.272 "is_configured": false, 00:16:15.272 "data_offset": 0, 00:16:15.272 "data_size": 65536 00:16:15.272 }, 00:16:15.272 { 00:16:15.272 "name": "BaseBdev2", 00:16:15.272 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:15.272 "is_configured": true, 00:16:15.272 "data_offset": 0, 00:16:15.272 "data_size": 65536 00:16:15.272 }, 00:16:15.272 { 00:16:15.272 "name": "BaseBdev3", 00:16:15.272 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:15.272 "is_configured": true, 00:16:15.272 "data_offset": 0, 00:16:15.272 "data_size": 65536 00:16:15.272 }, 00:16:15.272 { 00:16:15.272 "name": "BaseBdev4", 00:16:15.272 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:15.272 "is_configured": true, 00:16:15.272 "data_offset": 0, 00:16:15.272 "data_size": 65536 00:16:15.272 } 00:16:15.272 ] 00:16:15.272 }' 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.272 [2024-12-12 19:44:57.975448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.272 [2024-12-12 19:44:57.989620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.272 19:44:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:15.272 [2024-12-12 19:44:57.998235] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.222 19:44:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.222 19:44:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.222 19:44:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.222 19:44:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.222 19:44:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.222 19:44:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.222 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.222 19:44:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.222 19:44:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.222 19:44:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.222 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.222 "name": "raid_bdev1", 00:16:16.222 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:16.222 "strip_size_kb": 64, 00:16:16.222 "state": "online", 00:16:16.222 "raid_level": "raid5f", 00:16:16.222 "superblock": false, 00:16:16.222 "num_base_bdevs": 4, 00:16:16.222 "num_base_bdevs_discovered": 4, 00:16:16.222 "num_base_bdevs_operational": 4, 00:16:16.222 "process": { 00:16:16.222 "type": "rebuild", 00:16:16.222 "target": "spare", 00:16:16.222 "progress": { 00:16:16.222 "blocks": 19200, 00:16:16.222 "percent": 9 00:16:16.222 } 00:16:16.222 }, 00:16:16.222 "base_bdevs_list": [ 00:16:16.222 { 00:16:16.222 "name": "spare", 00:16:16.222 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:16.222 "is_configured": true, 00:16:16.222 "data_offset": 0, 00:16:16.222 "data_size": 65536 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "name": "BaseBdev2", 00:16:16.222 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:16.222 "is_configured": true, 00:16:16.222 "data_offset": 0, 00:16:16.222 "data_size": 65536 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "name": "BaseBdev3", 00:16:16.222 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:16.222 "is_configured": true, 00:16:16.222 "data_offset": 0, 00:16:16.222 "data_size": 65536 00:16:16.222 }, 00:16:16.222 { 00:16:16.222 "name": "BaseBdev4", 00:16:16.222 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:16.222 "is_configured": true, 00:16:16.222 "data_offset": 0, 00:16:16.222 "data_size": 65536 00:16:16.222 } 00:16:16.222 ] 00:16:16.222 }' 00:16:16.222 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=615 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.481 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.482 "name": "raid_bdev1", 00:16:16.482 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:16.482 "strip_size_kb": 64, 
00:16:16.482 "state": "online", 00:16:16.482 "raid_level": "raid5f", 00:16:16.482 "superblock": false, 00:16:16.482 "num_base_bdevs": 4, 00:16:16.482 "num_base_bdevs_discovered": 4, 00:16:16.482 "num_base_bdevs_operational": 4, 00:16:16.482 "process": { 00:16:16.482 "type": "rebuild", 00:16:16.482 "target": "spare", 00:16:16.482 "progress": { 00:16:16.482 "blocks": 21120, 00:16:16.482 "percent": 10 00:16:16.482 } 00:16:16.482 }, 00:16:16.482 "base_bdevs_list": [ 00:16:16.482 { 00:16:16.482 "name": "spare", 00:16:16.482 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:16.482 "is_configured": true, 00:16:16.482 "data_offset": 0, 00:16:16.482 "data_size": 65536 00:16:16.482 }, 00:16:16.482 { 00:16:16.482 "name": "BaseBdev2", 00:16:16.482 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:16.482 "is_configured": true, 00:16:16.482 "data_offset": 0, 00:16:16.482 "data_size": 65536 00:16:16.482 }, 00:16:16.482 { 00:16:16.482 "name": "BaseBdev3", 00:16:16.482 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:16.482 "is_configured": true, 00:16:16.482 "data_offset": 0, 00:16:16.482 "data_size": 65536 00:16:16.482 }, 00:16:16.482 { 00:16:16.482 "name": "BaseBdev4", 00:16:16.482 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:16.482 "is_configured": true, 00:16:16.482 "data_offset": 0, 00:16:16.482 "data_size": 65536 00:16:16.482 } 00:16:16.482 ] 00:16:16.482 }' 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.482 19:44:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.860 "name": "raid_bdev1", 00:16:17.860 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:17.860 "strip_size_kb": 64, 00:16:17.860 "state": "online", 00:16:17.860 "raid_level": "raid5f", 00:16:17.860 "superblock": false, 00:16:17.860 "num_base_bdevs": 4, 00:16:17.860 "num_base_bdevs_discovered": 4, 00:16:17.860 "num_base_bdevs_operational": 4, 00:16:17.860 "process": { 00:16:17.860 "type": "rebuild", 00:16:17.860 "target": "spare", 00:16:17.860 "progress": { 00:16:17.860 "blocks": 44160, 00:16:17.860 "percent": 22 00:16:17.860 } 00:16:17.860 }, 00:16:17.860 "base_bdevs_list": [ 00:16:17.860 { 00:16:17.860 "name": "spare", 00:16:17.860 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:17.860 "is_configured": true, 
00:16:17.860 "data_offset": 0, 00:16:17.860 "data_size": 65536 00:16:17.860 }, 00:16:17.860 { 00:16:17.860 "name": "BaseBdev2", 00:16:17.860 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:17.860 "is_configured": true, 00:16:17.860 "data_offset": 0, 00:16:17.860 "data_size": 65536 00:16:17.860 }, 00:16:17.860 { 00:16:17.860 "name": "BaseBdev3", 00:16:17.860 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:17.860 "is_configured": true, 00:16:17.860 "data_offset": 0, 00:16:17.860 "data_size": 65536 00:16:17.860 }, 00:16:17.860 { 00:16:17.860 "name": "BaseBdev4", 00:16:17.860 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:17.860 "is_configured": true, 00:16:17.860 "data_offset": 0, 00:16:17.860 "data_size": 65536 00:16:17.860 } 00:16:17.860 ] 00:16:17.860 }' 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.860 19:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.796 "name": "raid_bdev1", 00:16:18.796 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:18.796 "strip_size_kb": 64, 00:16:18.796 "state": "online", 00:16:18.796 "raid_level": "raid5f", 00:16:18.796 "superblock": false, 00:16:18.796 "num_base_bdevs": 4, 00:16:18.796 "num_base_bdevs_discovered": 4, 00:16:18.796 "num_base_bdevs_operational": 4, 00:16:18.796 "process": { 00:16:18.796 "type": "rebuild", 00:16:18.796 "target": "spare", 00:16:18.796 "progress": { 00:16:18.796 "blocks": 65280, 00:16:18.796 "percent": 33 00:16:18.796 } 00:16:18.796 }, 00:16:18.796 "base_bdevs_list": [ 00:16:18.796 { 00:16:18.796 "name": "spare", 00:16:18.796 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:18.796 "is_configured": true, 00:16:18.796 "data_offset": 0, 00:16:18.796 "data_size": 65536 00:16:18.796 }, 00:16:18.796 { 00:16:18.796 "name": "BaseBdev2", 00:16:18.796 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:18.796 "is_configured": true, 00:16:18.796 "data_offset": 0, 00:16:18.796 "data_size": 65536 00:16:18.796 }, 00:16:18.796 { 00:16:18.796 "name": "BaseBdev3", 00:16:18.796 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:18.796 "is_configured": true, 00:16:18.796 "data_offset": 0, 00:16:18.796 "data_size": 65536 00:16:18.796 }, 00:16:18.796 { 00:16:18.796 "name": "BaseBdev4", 00:16:18.796 "uuid": 
"2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:18.796 "is_configured": true, 00:16:18.796 "data_offset": 0, 00:16:18.796 "data_size": 65536 00:16:18.796 } 00:16:18.796 ] 00:16:18.796 }' 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.796 19:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.174 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.174 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.174 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.174 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.174 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.174 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.175 "name": "raid_bdev1", 00:16:20.175 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:20.175 "strip_size_kb": 64, 00:16:20.175 "state": "online", 00:16:20.175 "raid_level": "raid5f", 00:16:20.175 "superblock": false, 00:16:20.175 "num_base_bdevs": 4, 00:16:20.175 "num_base_bdevs_discovered": 4, 00:16:20.175 "num_base_bdevs_operational": 4, 00:16:20.175 "process": { 00:16:20.175 "type": "rebuild", 00:16:20.175 "target": "spare", 00:16:20.175 "progress": { 00:16:20.175 "blocks": 88320, 00:16:20.175 "percent": 44 00:16:20.175 } 00:16:20.175 }, 00:16:20.175 "base_bdevs_list": [ 00:16:20.175 { 00:16:20.175 "name": "spare", 00:16:20.175 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:20.175 "is_configured": true, 00:16:20.175 "data_offset": 0, 00:16:20.175 "data_size": 65536 00:16:20.175 }, 00:16:20.175 { 00:16:20.175 "name": "BaseBdev2", 00:16:20.175 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:20.175 "is_configured": true, 00:16:20.175 "data_offset": 0, 00:16:20.175 "data_size": 65536 00:16:20.175 }, 00:16:20.175 { 00:16:20.175 "name": "BaseBdev3", 00:16:20.175 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:20.175 "is_configured": true, 00:16:20.175 "data_offset": 0, 00:16:20.175 "data_size": 65536 00:16:20.175 }, 00:16:20.175 { 00:16:20.175 "name": "BaseBdev4", 00:16:20.175 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:20.175 "is_configured": true, 00:16:20.175 "data_offset": 0, 00:16:20.175 "data_size": 65536 00:16:20.175 } 00:16:20.175 ] 00:16:20.175 }' 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:20.175 19:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.113 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.113 "name": "raid_bdev1", 00:16:21.113 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:21.113 "strip_size_kb": 64, 00:16:21.113 "state": "online", 00:16:21.113 "raid_level": "raid5f", 00:16:21.113 "superblock": false, 00:16:21.113 "num_base_bdevs": 4, 00:16:21.113 "num_base_bdevs_discovered": 4, 00:16:21.113 "num_base_bdevs_operational": 4, 00:16:21.113 "process": { 00:16:21.113 "type": "rebuild", 00:16:21.113 "target": "spare", 00:16:21.113 "progress": { 00:16:21.113 "blocks": 109440, 00:16:21.113 "percent": 55 00:16:21.113 } 00:16:21.113 }, 00:16:21.113 
"base_bdevs_list": [ 00:16:21.113 { 00:16:21.113 "name": "spare", 00:16:21.113 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:21.113 "is_configured": true, 00:16:21.113 "data_offset": 0, 00:16:21.113 "data_size": 65536 00:16:21.113 }, 00:16:21.113 { 00:16:21.113 "name": "BaseBdev2", 00:16:21.113 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:21.114 "is_configured": true, 00:16:21.114 "data_offset": 0, 00:16:21.114 "data_size": 65536 00:16:21.114 }, 00:16:21.114 { 00:16:21.114 "name": "BaseBdev3", 00:16:21.114 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:21.114 "is_configured": true, 00:16:21.114 "data_offset": 0, 00:16:21.114 "data_size": 65536 00:16:21.114 }, 00:16:21.114 { 00:16:21.114 "name": "BaseBdev4", 00:16:21.114 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:21.114 "is_configured": true, 00:16:21.114 "data_offset": 0, 00:16:21.114 "data_size": 65536 00:16:21.114 } 00:16:21.114 ] 00:16:21.114 }' 00:16:21.114 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.114 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.114 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.114 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.114 19:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.493 19:45:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.493 "name": "raid_bdev1", 00:16:22.493 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:22.493 "strip_size_kb": 64, 00:16:22.493 "state": "online", 00:16:22.493 "raid_level": "raid5f", 00:16:22.493 "superblock": false, 00:16:22.493 "num_base_bdevs": 4, 00:16:22.493 "num_base_bdevs_discovered": 4, 00:16:22.493 "num_base_bdevs_operational": 4, 00:16:22.493 "process": { 00:16:22.493 "type": "rebuild", 00:16:22.493 "target": "spare", 00:16:22.493 "progress": { 00:16:22.493 "blocks": 130560, 00:16:22.493 "percent": 66 00:16:22.493 } 00:16:22.493 }, 00:16:22.493 "base_bdevs_list": [ 00:16:22.493 { 00:16:22.493 "name": "spare", 00:16:22.493 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:22.493 "is_configured": true, 00:16:22.493 "data_offset": 0, 00:16:22.493 "data_size": 65536 00:16:22.493 }, 00:16:22.493 { 00:16:22.493 "name": "BaseBdev2", 00:16:22.493 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:22.493 "is_configured": true, 00:16:22.493 "data_offset": 0, 00:16:22.493 "data_size": 65536 00:16:22.493 }, 00:16:22.493 { 00:16:22.493 "name": "BaseBdev3", 00:16:22.493 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:22.493 
"is_configured": true, 00:16:22.493 "data_offset": 0, 00:16:22.493 "data_size": 65536 00:16:22.493 }, 00:16:22.493 { 00:16:22.493 "name": "BaseBdev4", 00:16:22.493 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:22.493 "is_configured": true, 00:16:22.493 "data_offset": 0, 00:16:22.493 "data_size": 65536 00:16:22.493 } 00:16:22.493 ] 00:16:22.493 }' 00:16:22.493 19:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.493 19:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.493 19:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.493 19:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.493 19:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.432 "name": "raid_bdev1", 00:16:23.432 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:23.432 "strip_size_kb": 64, 00:16:23.432 "state": "online", 00:16:23.432 "raid_level": "raid5f", 00:16:23.432 "superblock": false, 00:16:23.432 "num_base_bdevs": 4, 00:16:23.432 "num_base_bdevs_discovered": 4, 00:16:23.432 "num_base_bdevs_operational": 4, 00:16:23.432 "process": { 00:16:23.432 "type": "rebuild", 00:16:23.432 "target": "spare", 00:16:23.432 "progress": { 00:16:23.432 "blocks": 153600, 00:16:23.432 "percent": 78 00:16:23.432 } 00:16:23.432 }, 00:16:23.432 "base_bdevs_list": [ 00:16:23.432 { 00:16:23.432 "name": "spare", 00:16:23.432 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:23.432 "is_configured": true, 00:16:23.432 "data_offset": 0, 00:16:23.432 "data_size": 65536 00:16:23.432 }, 00:16:23.432 { 00:16:23.432 "name": "BaseBdev2", 00:16:23.432 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:23.432 "is_configured": true, 00:16:23.432 "data_offset": 0, 00:16:23.432 "data_size": 65536 00:16:23.432 }, 00:16:23.432 { 00:16:23.432 "name": "BaseBdev3", 00:16:23.432 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:23.432 "is_configured": true, 00:16:23.432 "data_offset": 0, 00:16:23.432 "data_size": 65536 00:16:23.432 }, 00:16:23.432 { 00:16:23.432 "name": "BaseBdev4", 00:16:23.432 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:23.432 "is_configured": true, 00:16:23.432 "data_offset": 0, 00:16:23.432 "data_size": 65536 00:16:23.432 } 00:16:23.432 ] 00:16:23.432 }' 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.432 19:45:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.432 19:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.812 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.813 "name": "raid_bdev1", 00:16:24.813 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:24.813 "strip_size_kb": 64, 00:16:24.813 "state": "online", 00:16:24.813 "raid_level": "raid5f", 00:16:24.813 "superblock": false, 00:16:24.813 "num_base_bdevs": 4, 00:16:24.813 "num_base_bdevs_discovered": 4, 00:16:24.813 "num_base_bdevs_operational": 4, 00:16:24.813 "process": { 00:16:24.813 
"type": "rebuild", 00:16:24.813 "target": "spare", 00:16:24.813 "progress": { 00:16:24.813 "blocks": 174720, 00:16:24.813 "percent": 88 00:16:24.813 } 00:16:24.813 }, 00:16:24.813 "base_bdevs_list": [ 00:16:24.813 { 00:16:24.813 "name": "spare", 00:16:24.813 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:24.813 "is_configured": true, 00:16:24.813 "data_offset": 0, 00:16:24.813 "data_size": 65536 00:16:24.813 }, 00:16:24.813 { 00:16:24.813 "name": "BaseBdev2", 00:16:24.813 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:24.813 "is_configured": true, 00:16:24.813 "data_offset": 0, 00:16:24.813 "data_size": 65536 00:16:24.813 }, 00:16:24.813 { 00:16:24.813 "name": "BaseBdev3", 00:16:24.813 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:24.813 "is_configured": true, 00:16:24.813 "data_offset": 0, 00:16:24.813 "data_size": 65536 00:16:24.813 }, 00:16:24.813 { 00:16:24.813 "name": "BaseBdev4", 00:16:24.813 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:24.813 "is_configured": true, 00:16:24.813 "data_offset": 0, 00:16:24.813 "data_size": 65536 00:16:24.813 } 00:16:24.813 ] 00:16:24.813 }' 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.813 19:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.753 [2024-12-12 19:45:08.344365] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:25.753 [2024-12-12 19:45:08.344473] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:25.753 [2024-12-12 19:45:08.344534] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.753 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.753 "name": "raid_bdev1", 00:16:25.753 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:25.753 "strip_size_kb": 64, 00:16:25.753 "state": "online", 00:16:25.753 "raid_level": "raid5f", 00:16:25.753 "superblock": false, 00:16:25.753 "num_base_bdevs": 4, 00:16:25.753 "num_base_bdevs_discovered": 4, 00:16:25.753 "num_base_bdevs_operational": 4, 00:16:25.753 "base_bdevs_list": [ 00:16:25.753 { 00:16:25.753 "name": "spare", 00:16:25.753 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:25.753 "is_configured": true, 00:16:25.753 "data_offset": 0, 00:16:25.753 "data_size": 65536 00:16:25.754 }, 00:16:25.754 { 
00:16:25.754 "name": "BaseBdev2", 00:16:25.754 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:25.754 "is_configured": true, 00:16:25.754 "data_offset": 0, 00:16:25.754 "data_size": 65536 00:16:25.754 }, 00:16:25.754 { 00:16:25.754 "name": "BaseBdev3", 00:16:25.754 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:25.754 "is_configured": true, 00:16:25.754 "data_offset": 0, 00:16:25.754 "data_size": 65536 00:16:25.754 }, 00:16:25.754 { 00:16:25.754 "name": "BaseBdev4", 00:16:25.754 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:25.754 "is_configured": true, 00:16:25.754 "data_offset": 0, 00:16:25.754 "data_size": 65536 00:16:25.754 } 00:16:25.754 ] 00:16:25.754 }' 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.754 "name": "raid_bdev1", 00:16:25.754 "uuid": "6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:25.754 "strip_size_kb": 64, 00:16:25.754 "state": "online", 00:16:25.754 "raid_level": "raid5f", 00:16:25.754 "superblock": false, 00:16:25.754 "num_base_bdevs": 4, 00:16:25.754 "num_base_bdevs_discovered": 4, 00:16:25.754 "num_base_bdevs_operational": 4, 00:16:25.754 "base_bdevs_list": [ 00:16:25.754 { 00:16:25.754 "name": "spare", 00:16:25.754 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:25.754 "is_configured": true, 00:16:25.754 "data_offset": 0, 00:16:25.754 "data_size": 65536 00:16:25.754 }, 00:16:25.754 { 00:16:25.754 "name": "BaseBdev2", 00:16:25.754 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:25.754 "is_configured": true, 00:16:25.754 "data_offset": 0, 00:16:25.754 "data_size": 65536 00:16:25.754 }, 00:16:25.754 { 00:16:25.754 "name": "BaseBdev3", 00:16:25.754 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:25.754 "is_configured": true, 00:16:25.754 "data_offset": 0, 00:16:25.754 "data_size": 65536 00:16:25.754 }, 00:16:25.754 { 00:16:25.754 "name": "BaseBdev4", 00:16:25.754 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:25.754 "is_configured": true, 00:16:25.754 "data_offset": 0, 00:16:25.754 "data_size": 65536 00:16:25.754 } 00:16:25.754 ] 00:16:25.754 }' 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.754 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.754 19:45:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.013 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.013 "name": "raid_bdev1", 00:16:26.013 "uuid": 
"6003b55c-7ded-445c-870d-94da9dfe3fba", 00:16:26.013 "strip_size_kb": 64, 00:16:26.013 "state": "online", 00:16:26.013 "raid_level": "raid5f", 00:16:26.013 "superblock": false, 00:16:26.013 "num_base_bdevs": 4, 00:16:26.013 "num_base_bdevs_discovered": 4, 00:16:26.014 "num_base_bdevs_operational": 4, 00:16:26.014 "base_bdevs_list": [ 00:16:26.014 { 00:16:26.014 "name": "spare", 00:16:26.014 "uuid": "0305879e-ff67-5d68-92a3-6fc3befcc793", 00:16:26.014 "is_configured": true, 00:16:26.014 "data_offset": 0, 00:16:26.014 "data_size": 65536 00:16:26.014 }, 00:16:26.014 { 00:16:26.014 "name": "BaseBdev2", 00:16:26.014 "uuid": "fdb53ca1-997a-5a6f-b785-b0f38c2385fe", 00:16:26.014 "is_configured": true, 00:16:26.014 "data_offset": 0, 00:16:26.014 "data_size": 65536 00:16:26.014 }, 00:16:26.014 { 00:16:26.014 "name": "BaseBdev3", 00:16:26.014 "uuid": "d0a04ade-85de-5227-b3c1-cb336e5175dd", 00:16:26.014 "is_configured": true, 00:16:26.014 "data_offset": 0, 00:16:26.014 "data_size": 65536 00:16:26.014 }, 00:16:26.014 { 00:16:26.014 "name": "BaseBdev4", 00:16:26.014 "uuid": "2c34c1e7-8798-5eb9-a2b2-088764532578", 00:16:26.014 "is_configured": true, 00:16:26.014 "data_offset": 0, 00:16:26.014 "data_size": 65536 00:16:26.014 } 00:16:26.014 ] 00:16:26.014 }' 00:16:26.014 19:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.014 19:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.273 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:26.273 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.273 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.273 [2024-12-12 19:45:09.080000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.273 [2024-12-12 19:45:09.080037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:16:26.273 [2024-12-12 19:45:09.080117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.273 [2024-12-12 19:45:09.080205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.273 [2024-12-12 19:45:09.080214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:26.273 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.273 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.273 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:26.273 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.273 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.273 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:26.532 19:45:09 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:26.532 /dev/nbd0 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.532 1+0 records in 00:16:26.532 1+0 records out 00:16:26.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457228 s, 9.0 MB/s 00:16:26.532 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:26.792 /dev/nbd1 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.792 1+0 records in 00:16:26.792 1+0 records out 00:16:26.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425277 s, 9.6 MB/s 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.792 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:27.052 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:27.052 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.052 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:27.052 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.052 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:27.052 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.052 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:16:27.312 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.312 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.312 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.312 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.312 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.312 19:45:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.312 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:27.312 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.312 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.312 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:27.572 19:45:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86271 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 86271 ']' 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 86271 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86271 00:16:27.572 killing process with pid 86271 00:16:27.572 Received shutdown signal, test time was about 60.000000 seconds 00:16:27.572 00:16:27.572 Latency(us) 00:16:27.572 [2024-12-12T19:45:10.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.572 [2024-12-12T19:45:10.417Z] =================================================================================================================== 00:16:27.572 [2024-12-12T19:45:10.417Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86271' 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 86271 00:16:27.572 [2024-12-12 19:45:10.266194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.572 19:45:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 86271 00:16:28.140 [2024-12-12 19:45:10.725241] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.078 ************************************ 00:16:29.078 END TEST 
raid5f_rebuild_test 00:16:29.078 ************************************ 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:29.078 00:16:29.078 real 0m18.844s 00:16:29.078 user 0m22.612s 00:16:29.078 sys 0m2.275s 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.078 19:45:11 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:29.078 19:45:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:29.078 19:45:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.078 19:45:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.078 ************************************ 00:16:29.078 START TEST raid5f_rebuild_test_sb 00:16:29.078 ************************************ 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:29.078 19:45:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.078 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86764 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86764 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86764 ']' 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.079 19:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.338 [2024-12-12 19:45:11.949131] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:29.338 [2024-12-12 19:45:11.949280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:29.338 Zero copy mechanism will not be used. 00:16:29.338 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86764 ] 00:16:29.338 [2024-12-12 19:45:12.114280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.597 [2024-12-12 19:45:12.225072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.597 [2024-12-12 19:45:12.417176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.597 [2024-12-12 19:45:12.417261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.167 BaseBdev1_malloc 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:30.167 [2024-12-12 19:45:12.811842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:30.167 [2024-12-12 19:45:12.811902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.167 [2024-12-12 19:45:12.811922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.167 [2024-12-12 19:45:12.811933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.167 [2024-12-12 19:45:12.813964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.167 [2024-12-12 19:45:12.814070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.167 BaseBdev1 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.167 BaseBdev2_malloc 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.167 [2024-12-12 19:45:12.864143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:30.167 
[2024-12-12 19:45:12.864202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.167 [2024-12-12 19:45:12.864219] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:30.167 [2024-12-12 19:45:12.864230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.167 [2024-12-12 19:45:12.866196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.167 [2024-12-12 19:45:12.866234] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:30.167 BaseBdev2 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.167 BaseBdev3_malloc 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.167 [2024-12-12 19:45:12.948886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:30.167 [2024-12-12 19:45:12.948937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.167 [2024-12-12 19:45:12.948957] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:30.167 [2024-12-12 19:45:12.948967] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.167 [2024-12-12 19:45:12.950885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.167 [2024-12-12 19:45:12.950925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:30.167 BaseBdev3 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.167 BaseBdev4_malloc 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 19:45:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.167 [2024-12-12 19:45:13.002466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:30.167 [2024-12-12 19:45:13.002569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.167 [2024-12-12 19:45:13.002593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:30.167 [2024-12-12 19:45:13.002604] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:16:30.167 [2024-12-12 19:45:13.004527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.167 [2024-12-12 19:45:13.004572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:30.167 BaseBdev4 00:16:30.167 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.167 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:30.167 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.167 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.428 spare_malloc 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.428 spare_delay 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.428 [2024-12-12 19:45:13.068303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:30.428 [2024-12-12 19:45:13.068353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.428 [2024-12-12 19:45:13.068369] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:30.428 [2024-12-12 19:45:13.068379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.428 [2024-12-12 19:45:13.070411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.428 [2024-12-12 19:45:13.070453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:30.428 spare 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.428 [2024-12-12 19:45:13.080334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.428 [2024-12-12 19:45:13.081996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.428 [2024-12-12 19:45:13.082054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.428 [2024-12-12 19:45:13.082100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:30.428 [2024-12-12 19:45:13.082307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:30.428 [2024-12-12 19:45:13.082325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:30.428 [2024-12-12 19:45:13.082579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:30.428 [2024-12-12 19:45:13.089308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:30.428 
[2024-12-12 19:45:13.089329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:30.428 [2024-12-12 19:45:13.089495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.428 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.429 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.429 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.429 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.429 19:45:13 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.429 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.429 "name": "raid_bdev1", 00:16:30.429 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:30.429 "strip_size_kb": 64, 00:16:30.429 "state": "online", 00:16:30.429 "raid_level": "raid5f", 00:16:30.429 "superblock": true, 00:16:30.429 "num_base_bdevs": 4, 00:16:30.429 "num_base_bdevs_discovered": 4, 00:16:30.429 "num_base_bdevs_operational": 4, 00:16:30.429 "base_bdevs_list": [ 00:16:30.429 { 00:16:30.429 "name": "BaseBdev1", 00:16:30.429 "uuid": "e305a5b2-99c0-5759-a9bd-3aae3881ab85", 00:16:30.429 "is_configured": true, 00:16:30.429 "data_offset": 2048, 00:16:30.429 "data_size": 63488 00:16:30.429 }, 00:16:30.429 { 00:16:30.429 "name": "BaseBdev2", 00:16:30.429 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:30.429 "is_configured": true, 00:16:30.429 "data_offset": 2048, 00:16:30.429 "data_size": 63488 00:16:30.429 }, 00:16:30.429 { 00:16:30.429 "name": "BaseBdev3", 00:16:30.429 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:30.429 "is_configured": true, 00:16:30.429 "data_offset": 2048, 00:16:30.429 "data_size": 63488 00:16:30.429 }, 00:16:30.429 { 00:16:30.429 "name": "BaseBdev4", 00:16:30.429 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:30.429 "is_configured": true, 00:16:30.429 "data_offset": 2048, 00:16:30.429 "data_size": 63488 00:16:30.429 } 00:16:30.429 ] 00:16:30.429 }' 00:16:30.429 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.429 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.998 [2024-12-12 19:45:13.544777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.998 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:30.999 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:30.999 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:30.999 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:30.999 [2024-12-12 19:45:13.800125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:30.999 /dev/nbd0 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:16:31.259 1+0 records in 00:16:31.259 1+0 records out 00:16:31.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290575 s, 14.1 MB/s 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:31.259 19:45:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:31.829 496+0 records in 00:16:31.829 496+0 records out 00:16:31.829 97517568 bytes (98 MB, 93 MiB) copied, 0.479437 s, 203 MB/s 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.829 [2024-12-12 19:45:14.576649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.829 [2024-12-12 19:45:14.585789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.829 "name": "raid_bdev1", 00:16:31.829 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:31.829 "strip_size_kb": 64, 00:16:31.829 "state": "online", 00:16:31.829 "raid_level": "raid5f", 00:16:31.829 "superblock": true, 00:16:31.829 "num_base_bdevs": 4, 00:16:31.829 "num_base_bdevs_discovered": 3, 00:16:31.829 
"num_base_bdevs_operational": 3, 00:16:31.829 "base_bdevs_list": [ 00:16:31.829 { 00:16:31.829 "name": null, 00:16:31.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.829 "is_configured": false, 00:16:31.829 "data_offset": 0, 00:16:31.829 "data_size": 63488 00:16:31.829 }, 00:16:31.829 { 00:16:31.829 "name": "BaseBdev2", 00:16:31.829 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:31.829 "is_configured": true, 00:16:31.829 "data_offset": 2048, 00:16:31.829 "data_size": 63488 00:16:31.829 }, 00:16:31.829 { 00:16:31.829 "name": "BaseBdev3", 00:16:31.829 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:31.829 "is_configured": true, 00:16:31.829 "data_offset": 2048, 00:16:31.829 "data_size": 63488 00:16:31.829 }, 00:16:31.829 { 00:16:31.829 "name": "BaseBdev4", 00:16:31.829 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:31.829 "is_configured": true, 00:16:31.829 "data_offset": 2048, 00:16:31.829 "data_size": 63488 00:16:31.829 } 00:16:31.829 ] 00:16:31.829 }' 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.829 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.089 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.089 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.089 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.089 [2024-12-12 19:45:14.925168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.349 [2024-12-12 19:45:14.940247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:32.349 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.349 19:45:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:32.349 
[2024-12-12 19:45:14.948915] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.286 19:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.286 "name": "raid_bdev1", 00:16:33.286 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:33.286 "strip_size_kb": 64, 00:16:33.286 "state": "online", 00:16:33.286 "raid_level": "raid5f", 00:16:33.286 "superblock": true, 00:16:33.286 "num_base_bdevs": 4, 00:16:33.286 "num_base_bdevs_discovered": 4, 00:16:33.286 "num_base_bdevs_operational": 4, 00:16:33.286 "process": { 00:16:33.286 "type": "rebuild", 00:16:33.286 "target": "spare", 00:16:33.286 "progress": { 00:16:33.286 "blocks": 19200, 00:16:33.286 "percent": 10 00:16:33.286 } 00:16:33.286 }, 00:16:33.286 "base_bdevs_list": [ 00:16:33.286 { 00:16:33.286 "name": 
"spare", 00:16:33.286 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:33.286 "is_configured": true, 00:16:33.286 "data_offset": 2048, 00:16:33.286 "data_size": 63488 00:16:33.286 }, 00:16:33.286 { 00:16:33.286 "name": "BaseBdev2", 00:16:33.286 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:33.286 "is_configured": true, 00:16:33.286 "data_offset": 2048, 00:16:33.286 "data_size": 63488 00:16:33.286 }, 00:16:33.286 { 00:16:33.286 "name": "BaseBdev3", 00:16:33.286 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:33.286 "is_configured": true, 00:16:33.286 "data_offset": 2048, 00:16:33.286 "data_size": 63488 00:16:33.286 }, 00:16:33.286 { 00:16:33.286 "name": "BaseBdev4", 00:16:33.286 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:33.286 "is_configured": true, 00:16:33.286 "data_offset": 2048, 00:16:33.286 "data_size": 63488 00:16:33.286 } 00:16:33.286 ] 00:16:33.286 }' 00:16:33.286 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.286 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.286 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.286 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.286 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:33.286 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.286 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.286 [2024-12-12 19:45:16.103730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.545 [2024-12-12 19:45:16.154874] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.545 [2024-12-12 
19:45:16.154937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.545 [2024-12-12 19:45:16.154953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.545 [2024-12-12 19:45:16.154962] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.545 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.545 "name": "raid_bdev1", 00:16:33.545 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:33.545 "strip_size_kb": 64, 00:16:33.545 "state": "online", 00:16:33.546 "raid_level": "raid5f", 00:16:33.546 "superblock": true, 00:16:33.546 "num_base_bdevs": 4, 00:16:33.546 "num_base_bdevs_discovered": 3, 00:16:33.546 "num_base_bdevs_operational": 3, 00:16:33.546 "base_bdevs_list": [ 00:16:33.546 { 00:16:33.546 "name": null, 00:16:33.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.546 "is_configured": false, 00:16:33.546 "data_offset": 0, 00:16:33.546 "data_size": 63488 00:16:33.546 }, 00:16:33.546 { 00:16:33.546 "name": "BaseBdev2", 00:16:33.546 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:33.546 "is_configured": true, 00:16:33.546 "data_offset": 2048, 00:16:33.546 "data_size": 63488 00:16:33.546 }, 00:16:33.546 { 00:16:33.546 "name": "BaseBdev3", 00:16:33.546 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:33.546 "is_configured": true, 00:16:33.546 "data_offset": 2048, 00:16:33.546 "data_size": 63488 00:16:33.546 }, 00:16:33.546 { 00:16:33.546 "name": "BaseBdev4", 00:16:33.546 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:33.546 "is_configured": true, 00:16:33.546 "data_offset": 2048, 00:16:33.546 "data_size": 63488 00:16:33.546 } 00:16:33.546 ] 00:16:33.546 }' 00:16:33.546 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.546 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.805 "name": "raid_bdev1", 00:16:33.805 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:33.805 "strip_size_kb": 64, 00:16:33.805 "state": "online", 00:16:33.805 "raid_level": "raid5f", 00:16:33.805 "superblock": true, 00:16:33.805 "num_base_bdevs": 4, 00:16:33.805 "num_base_bdevs_discovered": 3, 00:16:33.805 "num_base_bdevs_operational": 3, 00:16:33.805 "base_bdevs_list": [ 00:16:33.805 { 00:16:33.805 "name": null, 00:16:33.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.805 "is_configured": false, 00:16:33.805 "data_offset": 0, 00:16:33.805 "data_size": 63488 00:16:33.805 }, 00:16:33.805 { 00:16:33.805 "name": "BaseBdev2", 00:16:33.805 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:33.805 "is_configured": true, 00:16:33.805 "data_offset": 2048, 00:16:33.805 "data_size": 63488 00:16:33.805 }, 00:16:33.805 { 00:16:33.805 "name": "BaseBdev3", 00:16:33.805 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:33.805 "is_configured": true, 
00:16:33.805 "data_offset": 2048, 00:16:33.805 "data_size": 63488 00:16:33.805 }, 00:16:33.805 { 00:16:33.805 "name": "BaseBdev4", 00:16:33.805 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:33.805 "is_configured": true, 00:16:33.805 "data_offset": 2048, 00:16:33.805 "data_size": 63488 00:16:33.805 } 00:16:33.805 ] 00:16:33.805 }' 00:16:33.805 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.064 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.064 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.064 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.064 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:34.064 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.064 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.064 [2024-12-12 19:45:16.734941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.064 [2024-12-12 19:45:16.748326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:34.064 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.064 19:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:34.064 [2024-12-12 19:45:16.756788] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.001 19:45:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.001 "name": "raid_bdev1", 00:16:35.001 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:35.001 "strip_size_kb": 64, 00:16:35.001 "state": "online", 00:16:35.001 "raid_level": "raid5f", 00:16:35.001 "superblock": true, 00:16:35.001 "num_base_bdevs": 4, 00:16:35.001 "num_base_bdevs_discovered": 4, 00:16:35.001 "num_base_bdevs_operational": 4, 00:16:35.001 "process": { 00:16:35.001 "type": "rebuild", 00:16:35.001 "target": "spare", 00:16:35.001 "progress": { 00:16:35.001 "blocks": 19200, 00:16:35.001 "percent": 10 00:16:35.001 } 00:16:35.001 }, 00:16:35.001 "base_bdevs_list": [ 00:16:35.001 { 00:16:35.001 "name": "spare", 00:16:35.001 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:35.001 "is_configured": true, 00:16:35.001 "data_offset": 2048, 00:16:35.001 "data_size": 63488 00:16:35.001 }, 00:16:35.001 { 00:16:35.001 "name": "BaseBdev2", 00:16:35.001 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:35.001 "is_configured": true, 00:16:35.001 "data_offset": 2048, 00:16:35.001 "data_size": 63488 
00:16:35.001 }, 00:16:35.001 { 00:16:35.001 "name": "BaseBdev3", 00:16:35.001 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:35.001 "is_configured": true, 00:16:35.001 "data_offset": 2048, 00:16:35.001 "data_size": 63488 00:16:35.001 }, 00:16:35.001 { 00:16:35.001 "name": "BaseBdev4", 00:16:35.001 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:35.001 "is_configured": true, 00:16:35.001 "data_offset": 2048, 00:16:35.001 "data_size": 63488 00:16:35.001 } 00:16:35.001 ] 00:16:35.001 }' 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.001 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:35.261 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=633 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.261 19:45:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.261 "name": "raid_bdev1", 00:16:35.261 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:35.261 "strip_size_kb": 64, 00:16:35.261 "state": "online", 00:16:35.261 "raid_level": "raid5f", 00:16:35.261 "superblock": true, 00:16:35.261 "num_base_bdevs": 4, 00:16:35.261 "num_base_bdevs_discovered": 4, 00:16:35.261 "num_base_bdevs_operational": 4, 00:16:35.261 "process": { 00:16:35.261 "type": "rebuild", 00:16:35.261 "target": "spare", 00:16:35.261 "progress": { 00:16:35.261 "blocks": 21120, 00:16:35.261 "percent": 11 00:16:35.261 } 00:16:35.261 }, 00:16:35.261 "base_bdevs_list": [ 00:16:35.261 { 00:16:35.261 "name": "spare", 00:16:35.261 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:35.261 "is_configured": true, 00:16:35.261 "data_offset": 2048, 00:16:35.261 "data_size": 63488 00:16:35.261 }, 00:16:35.261 { 00:16:35.261 "name": "BaseBdev2", 00:16:35.261 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:35.261 "is_configured": true, 00:16:35.261 "data_offset": 2048, 00:16:35.261 "data_size": 63488 
00:16:35.261 }, 00:16:35.261 { 00:16:35.261 "name": "BaseBdev3", 00:16:35.261 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:35.261 "is_configured": true, 00:16:35.261 "data_offset": 2048, 00:16:35.261 "data_size": 63488 00:16:35.261 }, 00:16:35.261 { 00:16:35.261 "name": "BaseBdev4", 00:16:35.261 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:35.261 "is_configured": true, 00:16:35.261 "data_offset": 2048, 00:16:35.261 "data_size": 63488 00:16:35.261 } 00:16:35.261 ] 00:16:35.261 }' 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.261 19:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.261 19:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.261 19:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.246 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.246 "name": "raid_bdev1", 00:16:36.246 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:36.246 "strip_size_kb": 64, 00:16:36.246 "state": "online", 00:16:36.246 "raid_level": "raid5f", 00:16:36.246 "superblock": true, 00:16:36.246 "num_base_bdevs": 4, 00:16:36.246 "num_base_bdevs_discovered": 4, 00:16:36.246 "num_base_bdevs_operational": 4, 00:16:36.246 "process": { 00:16:36.246 "type": "rebuild", 00:16:36.246 "target": "spare", 00:16:36.246 "progress": { 00:16:36.246 "blocks": 42240, 00:16:36.246 "percent": 22 00:16:36.246 } 00:16:36.246 }, 00:16:36.246 "base_bdevs_list": [ 00:16:36.246 { 00:16:36.246 "name": "spare", 00:16:36.246 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:36.246 "is_configured": true, 00:16:36.246 "data_offset": 2048, 00:16:36.246 "data_size": 63488 00:16:36.246 }, 00:16:36.246 { 00:16:36.246 "name": "BaseBdev2", 00:16:36.246 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:36.246 "is_configured": true, 00:16:36.246 "data_offset": 2048, 00:16:36.246 "data_size": 63488 00:16:36.246 }, 00:16:36.246 { 00:16:36.246 "name": "BaseBdev3", 00:16:36.246 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:36.246 "is_configured": true, 00:16:36.246 "data_offset": 2048, 00:16:36.246 "data_size": 63488 00:16:36.246 }, 00:16:36.246 { 00:16:36.246 "name": "BaseBdev4", 00:16:36.246 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:36.246 "is_configured": true, 00:16:36.246 "data_offset": 2048, 00:16:36.246 "data_size": 63488 00:16:36.246 } 00:16:36.246 ] 00:16:36.246 }' 00:16:36.246 19:45:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.514 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.514 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.514 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.514 19:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.452 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.452 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.452 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.452 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.452 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.452 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.452 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.452 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.453 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.453 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.453 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.453 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.453 "name": "raid_bdev1", 00:16:37.453 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:37.453 
"strip_size_kb": 64, 00:16:37.453 "state": "online", 00:16:37.453 "raid_level": "raid5f", 00:16:37.453 "superblock": true, 00:16:37.453 "num_base_bdevs": 4, 00:16:37.453 "num_base_bdevs_discovered": 4, 00:16:37.453 "num_base_bdevs_operational": 4, 00:16:37.453 "process": { 00:16:37.453 "type": "rebuild", 00:16:37.453 "target": "spare", 00:16:37.453 "progress": { 00:16:37.453 "blocks": 65280, 00:16:37.453 "percent": 34 00:16:37.453 } 00:16:37.453 }, 00:16:37.453 "base_bdevs_list": [ 00:16:37.453 { 00:16:37.453 "name": "spare", 00:16:37.453 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:37.453 "is_configured": true, 00:16:37.453 "data_offset": 2048, 00:16:37.453 "data_size": 63488 00:16:37.453 }, 00:16:37.453 { 00:16:37.453 "name": "BaseBdev2", 00:16:37.453 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:37.453 "is_configured": true, 00:16:37.453 "data_offset": 2048, 00:16:37.453 "data_size": 63488 00:16:37.453 }, 00:16:37.453 { 00:16:37.453 "name": "BaseBdev3", 00:16:37.453 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:37.453 "is_configured": true, 00:16:37.453 "data_offset": 2048, 00:16:37.453 "data_size": 63488 00:16:37.453 }, 00:16:37.453 { 00:16:37.453 "name": "BaseBdev4", 00:16:37.453 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:37.453 "is_configured": true, 00:16:37.453 "data_offset": 2048, 00:16:37.453 "data_size": 63488 00:16:37.453 } 00:16:37.453 ] 00:16:37.453 }' 00:16:37.453 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.453 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.453 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.712 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.712 19:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.651 
19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.651 "name": "raid_bdev1", 00:16:38.651 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:38.651 "strip_size_kb": 64, 00:16:38.651 "state": "online", 00:16:38.651 "raid_level": "raid5f", 00:16:38.651 "superblock": true, 00:16:38.651 "num_base_bdevs": 4, 00:16:38.651 "num_base_bdevs_discovered": 4, 00:16:38.651 "num_base_bdevs_operational": 4, 00:16:38.651 "process": { 00:16:38.651 "type": "rebuild", 00:16:38.651 "target": "spare", 00:16:38.651 "progress": { 00:16:38.651 "blocks": 86400, 00:16:38.651 "percent": 45 00:16:38.651 } 00:16:38.651 }, 00:16:38.651 "base_bdevs_list": [ 00:16:38.651 { 00:16:38.651 "name": "spare", 00:16:38.651 "uuid": 
"b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:38.651 "is_configured": true, 00:16:38.651 "data_offset": 2048, 00:16:38.651 "data_size": 63488 00:16:38.651 }, 00:16:38.651 { 00:16:38.651 "name": "BaseBdev2", 00:16:38.651 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:38.651 "is_configured": true, 00:16:38.651 "data_offset": 2048, 00:16:38.651 "data_size": 63488 00:16:38.651 }, 00:16:38.651 { 00:16:38.651 "name": "BaseBdev3", 00:16:38.651 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:38.651 "is_configured": true, 00:16:38.651 "data_offset": 2048, 00:16:38.651 "data_size": 63488 00:16:38.651 }, 00:16:38.651 { 00:16:38.651 "name": "BaseBdev4", 00:16:38.651 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:38.651 "is_configured": true, 00:16:38.651 "data_offset": 2048, 00:16:38.651 "data_size": 63488 00:16:38.651 } 00:16:38.651 ] 00:16:38.651 }' 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.651 19:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.030 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.030 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.030 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.030 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.030 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.031 "name": "raid_bdev1", 00:16:40.031 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:40.031 "strip_size_kb": 64, 00:16:40.031 "state": "online", 00:16:40.031 "raid_level": "raid5f", 00:16:40.031 "superblock": true, 00:16:40.031 "num_base_bdevs": 4, 00:16:40.031 "num_base_bdevs_discovered": 4, 00:16:40.031 "num_base_bdevs_operational": 4, 00:16:40.031 "process": { 00:16:40.031 "type": "rebuild", 00:16:40.031 "target": "spare", 00:16:40.031 "progress": { 00:16:40.031 "blocks": 107520, 00:16:40.031 "percent": 56 00:16:40.031 } 00:16:40.031 }, 00:16:40.031 "base_bdevs_list": [ 00:16:40.031 { 00:16:40.031 "name": "spare", 00:16:40.031 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:40.031 "is_configured": true, 00:16:40.031 "data_offset": 2048, 00:16:40.031 "data_size": 63488 00:16:40.031 }, 00:16:40.031 { 00:16:40.031 "name": "BaseBdev2", 00:16:40.031 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:40.031 "is_configured": true, 00:16:40.031 "data_offset": 2048, 00:16:40.031 "data_size": 63488 00:16:40.031 }, 00:16:40.031 { 00:16:40.031 "name": "BaseBdev3", 00:16:40.031 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:40.031 "is_configured": true, 00:16:40.031 
"data_offset": 2048, 00:16:40.031 "data_size": 63488 00:16:40.031 }, 00:16:40.031 { 00:16:40.031 "name": "BaseBdev4", 00:16:40.031 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:40.031 "is_configured": true, 00:16:40.031 "data_offset": 2048, 00:16:40.031 "data_size": 63488 00:16:40.031 } 00:16:40.031 ] 00:16:40.031 }' 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.031 19:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.970 "name": "raid_bdev1", 00:16:40.970 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:40.970 "strip_size_kb": 64, 00:16:40.970 "state": "online", 00:16:40.970 "raid_level": "raid5f", 00:16:40.970 "superblock": true, 00:16:40.970 "num_base_bdevs": 4, 00:16:40.970 "num_base_bdevs_discovered": 4, 00:16:40.970 "num_base_bdevs_operational": 4, 00:16:40.970 "process": { 00:16:40.970 "type": "rebuild", 00:16:40.970 "target": "spare", 00:16:40.970 "progress": { 00:16:40.970 "blocks": 130560, 00:16:40.970 "percent": 68 00:16:40.970 } 00:16:40.970 }, 00:16:40.970 "base_bdevs_list": [ 00:16:40.970 { 00:16:40.970 "name": "spare", 00:16:40.970 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:40.970 "is_configured": true, 00:16:40.970 "data_offset": 2048, 00:16:40.970 "data_size": 63488 00:16:40.970 }, 00:16:40.970 { 00:16:40.970 "name": "BaseBdev2", 00:16:40.970 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:40.970 "is_configured": true, 00:16:40.970 "data_offset": 2048, 00:16:40.970 "data_size": 63488 00:16:40.970 }, 00:16:40.970 { 00:16:40.970 "name": "BaseBdev3", 00:16:40.970 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:40.970 "is_configured": true, 00:16:40.970 "data_offset": 2048, 00:16:40.970 "data_size": 63488 00:16:40.970 }, 00:16:40.970 { 00:16:40.970 "name": "BaseBdev4", 00:16:40.970 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:40.970 "is_configured": true, 00:16:40.970 "data_offset": 2048, 00:16:40.970 "data_size": 63488 00:16:40.970 } 00:16:40.970 ] 00:16:40.970 }' 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.970 19:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.910 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.170 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.170 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.170 "name": "raid_bdev1", 00:16:42.170 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:42.170 "strip_size_kb": 64, 00:16:42.170 "state": "online", 00:16:42.170 "raid_level": "raid5f", 00:16:42.170 "superblock": true, 00:16:42.170 "num_base_bdevs": 4, 00:16:42.170 "num_base_bdevs_discovered": 4, 
00:16:42.170 "num_base_bdevs_operational": 4, 00:16:42.170 "process": { 00:16:42.170 "type": "rebuild", 00:16:42.170 "target": "spare", 00:16:42.170 "progress": { 00:16:42.170 "blocks": 151680, 00:16:42.170 "percent": 79 00:16:42.170 } 00:16:42.170 }, 00:16:42.170 "base_bdevs_list": [ 00:16:42.170 { 00:16:42.170 "name": "spare", 00:16:42.170 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:42.170 "is_configured": true, 00:16:42.170 "data_offset": 2048, 00:16:42.170 "data_size": 63488 00:16:42.170 }, 00:16:42.170 { 00:16:42.170 "name": "BaseBdev2", 00:16:42.170 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:42.170 "is_configured": true, 00:16:42.170 "data_offset": 2048, 00:16:42.170 "data_size": 63488 00:16:42.170 }, 00:16:42.170 { 00:16:42.170 "name": "BaseBdev3", 00:16:42.170 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:42.170 "is_configured": true, 00:16:42.170 "data_offset": 2048, 00:16:42.170 "data_size": 63488 00:16:42.170 }, 00:16:42.170 { 00:16:42.170 "name": "BaseBdev4", 00:16:42.170 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:42.170 "is_configured": true, 00:16:42.170 "data_offset": 2048, 00:16:42.170 "data_size": 63488 00:16:42.170 } 00:16:42.170 ] 00:16:42.170 }' 00:16:42.170 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.170 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.170 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.170 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.170 19:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.110 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.110 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:43.110 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.110 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.111 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.111 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.111 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.111 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.111 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.111 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.111 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.111 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.111 "name": "raid_bdev1", 00:16:43.111 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:43.111 "strip_size_kb": 64, 00:16:43.111 "state": "online", 00:16:43.111 "raid_level": "raid5f", 00:16:43.111 "superblock": true, 00:16:43.111 "num_base_bdevs": 4, 00:16:43.111 "num_base_bdevs_discovered": 4, 00:16:43.111 "num_base_bdevs_operational": 4, 00:16:43.111 "process": { 00:16:43.111 "type": "rebuild", 00:16:43.111 "target": "spare", 00:16:43.111 "progress": { 00:16:43.111 "blocks": 174720, 00:16:43.111 "percent": 91 00:16:43.111 } 00:16:43.111 }, 00:16:43.111 "base_bdevs_list": [ 00:16:43.111 { 00:16:43.111 "name": "spare", 00:16:43.111 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:43.111 "is_configured": true, 00:16:43.111 "data_offset": 2048, 00:16:43.111 "data_size": 63488 00:16:43.111 }, 00:16:43.111 { 00:16:43.111 "name": "BaseBdev2", 
00:16:43.111 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:43.111 "is_configured": true, 00:16:43.111 "data_offset": 2048, 00:16:43.111 "data_size": 63488 00:16:43.111 }, 00:16:43.111 { 00:16:43.111 "name": "BaseBdev3", 00:16:43.111 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:43.111 "is_configured": true, 00:16:43.111 "data_offset": 2048, 00:16:43.111 "data_size": 63488 00:16:43.111 }, 00:16:43.111 { 00:16:43.111 "name": "BaseBdev4", 00:16:43.111 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:43.111 "is_configured": true, 00:16:43.111 "data_offset": 2048, 00:16:43.111 "data_size": 63488 00:16:43.111 } 00:16:43.111 ] 00:16:43.111 }' 00:16:43.111 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.370 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.370 19:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.370 19:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.370 19:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.308 [2024-12-12 19:45:26.799604] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:44.308 [2024-12-12 19:45:26.799708] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:44.308 [2024-12-12 19:45:26.799882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.308 19:45:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.308 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.308 "name": "raid_bdev1", 00:16:44.308 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:44.308 "strip_size_kb": 64, 00:16:44.308 "state": "online", 00:16:44.308 "raid_level": "raid5f", 00:16:44.308 "superblock": true, 00:16:44.308 "num_base_bdevs": 4, 00:16:44.308 "num_base_bdevs_discovered": 4, 00:16:44.308 "num_base_bdevs_operational": 4, 00:16:44.308 "base_bdevs_list": [ 00:16:44.308 { 00:16:44.308 "name": "spare", 00:16:44.308 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:44.308 "is_configured": true, 00:16:44.308 "data_offset": 2048, 00:16:44.308 "data_size": 63488 00:16:44.308 }, 00:16:44.308 { 00:16:44.308 "name": "BaseBdev2", 00:16:44.308 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:44.308 "is_configured": true, 00:16:44.308 "data_offset": 2048, 00:16:44.308 "data_size": 63488 00:16:44.308 }, 00:16:44.308 { 00:16:44.308 "name": "BaseBdev3", 00:16:44.308 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:44.308 "is_configured": true, 00:16:44.308 "data_offset": 2048, 00:16:44.308 
"data_size": 63488 00:16:44.308 }, 00:16:44.308 { 00:16:44.308 "name": "BaseBdev4", 00:16:44.308 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:44.308 "is_configured": true, 00:16:44.308 "data_offset": 2048, 00:16:44.308 "data_size": 63488 00:16:44.308 } 00:16:44.308 ] 00:16:44.308 }' 00:16:44.309 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.309 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:44.309 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.567 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.567 19:45:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.567 "name": "raid_bdev1", 00:16:44.567 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:44.567 "strip_size_kb": 64, 00:16:44.567 "state": "online", 00:16:44.567 "raid_level": "raid5f", 00:16:44.567 "superblock": true, 00:16:44.567 "num_base_bdevs": 4, 00:16:44.567 "num_base_bdevs_discovered": 4, 00:16:44.567 "num_base_bdevs_operational": 4, 00:16:44.567 "base_bdevs_list": [ 00:16:44.567 { 00:16:44.567 "name": "spare", 00:16:44.567 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:44.567 "is_configured": true, 00:16:44.567 "data_offset": 2048, 00:16:44.567 "data_size": 63488 00:16:44.567 }, 00:16:44.567 { 00:16:44.567 "name": "BaseBdev2", 00:16:44.567 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:44.567 "is_configured": true, 00:16:44.567 "data_offset": 2048, 00:16:44.567 "data_size": 63488 00:16:44.568 }, 00:16:44.568 { 00:16:44.568 "name": "BaseBdev3", 00:16:44.568 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:44.568 "is_configured": true, 00:16:44.568 "data_offset": 2048, 00:16:44.568 "data_size": 63488 00:16:44.568 }, 00:16:44.568 { 00:16:44.568 "name": "BaseBdev4", 00:16:44.568 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:44.568 "is_configured": true, 00:16:44.568 "data_offset": 2048, 00:16:44.568 "data_size": 63488 00:16:44.568 } 00:16:44.568 ] 00:16:44.568 }' 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.568 "name": "raid_bdev1", 00:16:44.568 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:44.568 "strip_size_kb": 64, 00:16:44.568 "state": "online", 00:16:44.568 "raid_level": "raid5f", 00:16:44.568 "superblock": true, 00:16:44.568 "num_base_bdevs": 4, 00:16:44.568 "num_base_bdevs_discovered": 4, 00:16:44.568 
"num_base_bdevs_operational": 4, 00:16:44.568 "base_bdevs_list": [ 00:16:44.568 { 00:16:44.568 "name": "spare", 00:16:44.568 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:44.568 "is_configured": true, 00:16:44.568 "data_offset": 2048, 00:16:44.568 "data_size": 63488 00:16:44.568 }, 00:16:44.568 { 00:16:44.568 "name": "BaseBdev2", 00:16:44.568 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:44.568 "is_configured": true, 00:16:44.568 "data_offset": 2048, 00:16:44.568 "data_size": 63488 00:16:44.568 }, 00:16:44.568 { 00:16:44.568 "name": "BaseBdev3", 00:16:44.568 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:44.568 "is_configured": true, 00:16:44.568 "data_offset": 2048, 00:16:44.568 "data_size": 63488 00:16:44.568 }, 00:16:44.568 { 00:16:44.568 "name": "BaseBdev4", 00:16:44.568 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:44.568 "is_configured": true, 00:16:44.568 "data_offset": 2048, 00:16:44.568 "data_size": 63488 00:16:44.568 } 00:16:44.568 ] 00:16:44.568 }' 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.568 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.136 [2024-12-12 19:45:27.770968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.136 [2024-12-12 19:45:27.771040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.136 [2024-12-12 19:45:27.771154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.136 [2024-12-12 19:45:27.771284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:16:45.136 [2024-12-12 19:45:27.771347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:45.136 19:45:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:45.136 19:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:45.396 /dev/nbd0 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.396 1+0 records in 00:16:45.396 1+0 records out 00:16:45.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589593 s, 6.9 MB/s 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:45.396 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:45.656 /dev/nbd1 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.656 1+0 records in 00:16:45.656 1+0 records out 00:16:45.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290194 s, 14.1 MB/s 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.656 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.915 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.175 [2024-12-12 19:45:28.921208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:46.175 [2024-12-12 19:45:28.921269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.175 [2024-12-12 19:45:28.921293] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:46.175 [2024-12-12 19:45:28.921304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.175 [2024-12-12 19:45:28.923457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.175 [2024-12-12 19:45:28.923499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:46.175 [2024-12-12 19:45:28.923592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:46.175 [2024-12-12 19:45:28.923644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.175 [2024-12-12 19:45:28.923786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.175 [2024-12-12 19:45:28.923912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:16:46.175 [2024-12-12 19:45:28.924005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.175 spare 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.175 19:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.434 [2024-12-12 19:45:29.023895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:46.435 [2024-12-12 19:45:29.023924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:46.435 [2024-12-12 19:45:29.024173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:46.435 [2024-12-12 19:45:29.030710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:46.435 [2024-12-12 19:45:29.030730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:46.435 [2024-12-12 19:45:29.030893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.435 19:45:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.435 "name": "raid_bdev1", 00:16:46.435 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:46.435 "strip_size_kb": 64, 00:16:46.435 "state": "online", 00:16:46.435 "raid_level": "raid5f", 00:16:46.435 "superblock": true, 00:16:46.435 "num_base_bdevs": 4, 00:16:46.435 "num_base_bdevs_discovered": 4, 00:16:46.435 "num_base_bdevs_operational": 4, 00:16:46.435 "base_bdevs_list": [ 00:16:46.435 { 00:16:46.435 "name": "spare", 00:16:46.435 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:46.435 "is_configured": true, 00:16:46.435 "data_offset": 2048, 00:16:46.435 "data_size": 63488 00:16:46.435 }, 00:16:46.435 { 00:16:46.435 "name": "BaseBdev2", 00:16:46.435 "uuid": 
"fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:46.435 "is_configured": true, 00:16:46.435 "data_offset": 2048, 00:16:46.435 "data_size": 63488 00:16:46.435 }, 00:16:46.435 { 00:16:46.435 "name": "BaseBdev3", 00:16:46.435 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:46.435 "is_configured": true, 00:16:46.435 "data_offset": 2048, 00:16:46.435 "data_size": 63488 00:16:46.435 }, 00:16:46.435 { 00:16:46.435 "name": "BaseBdev4", 00:16:46.435 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:46.435 "is_configured": true, 00:16:46.435 "data_offset": 2048, 00:16:46.435 "data_size": 63488 00:16:46.435 } 00:16:46.435 ] 00:16:46.435 }' 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.435 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.694 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.694 19:45:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.694 "name": "raid_bdev1", 00:16:46.694 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:46.694 "strip_size_kb": 64, 00:16:46.694 "state": "online", 00:16:46.694 "raid_level": "raid5f", 00:16:46.694 "superblock": true, 00:16:46.694 "num_base_bdevs": 4, 00:16:46.694 "num_base_bdevs_discovered": 4, 00:16:46.694 "num_base_bdevs_operational": 4, 00:16:46.694 "base_bdevs_list": [ 00:16:46.694 { 00:16:46.694 "name": "spare", 00:16:46.694 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:46.694 "is_configured": true, 00:16:46.694 "data_offset": 2048, 00:16:46.694 "data_size": 63488 00:16:46.694 }, 00:16:46.694 { 00:16:46.694 "name": "BaseBdev2", 00:16:46.694 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:46.694 "is_configured": true, 00:16:46.694 "data_offset": 2048, 00:16:46.694 "data_size": 63488 00:16:46.694 }, 00:16:46.694 { 00:16:46.694 "name": "BaseBdev3", 00:16:46.694 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:46.694 "is_configured": true, 00:16:46.694 "data_offset": 2048, 00:16:46.694 "data_size": 63488 00:16:46.694 }, 00:16:46.694 { 00:16:46.694 "name": "BaseBdev4", 00:16:46.694 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:46.694 "is_configured": true, 00:16:46.694 "data_offset": 2048, 00:16:46.694 "data_size": 63488 00:16:46.694 } 00:16:46.694 ] 00:16:46.694 }' 00:16:46.695 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.954 
19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.954 [2024-12-12 19:45:29.658396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.954 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.954 "name": "raid_bdev1", 00:16:46.954 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:46.954 "strip_size_kb": 64, 00:16:46.954 "state": "online", 00:16:46.954 "raid_level": "raid5f", 00:16:46.954 "superblock": true, 00:16:46.954 "num_base_bdevs": 4, 00:16:46.954 "num_base_bdevs_discovered": 3, 00:16:46.954 "num_base_bdevs_operational": 3, 00:16:46.954 "base_bdevs_list": [ 00:16:46.954 { 00:16:46.954 "name": null, 00:16:46.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.954 "is_configured": false, 00:16:46.954 "data_offset": 0, 00:16:46.954 "data_size": 63488 00:16:46.954 }, 00:16:46.954 { 00:16:46.954 "name": "BaseBdev2", 00:16:46.954 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:46.954 "is_configured": true, 00:16:46.954 "data_offset": 2048, 00:16:46.954 "data_size": 63488 00:16:46.954 }, 00:16:46.954 { 00:16:46.954 "name": "BaseBdev3", 00:16:46.954 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:46.955 "is_configured": true, 00:16:46.955 "data_offset": 2048, 00:16:46.955 "data_size": 63488 00:16:46.955 }, 00:16:46.955 { 00:16:46.955 "name": "BaseBdev4", 
00:16:46.955 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:46.955 "is_configured": true, 00:16:46.955 "data_offset": 2048, 00:16:46.955 "data_size": 63488 00:16:46.955 } 00:16:46.955 ] 00:16:46.955 }' 00:16:46.955 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.955 19:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.523 19:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:47.523 19:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.523 19:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.523 [2024-12-12 19:45:30.074421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.524 [2024-12-12 19:45:30.074674] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.524 [2024-12-12 19:45:30.074741] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:47.524 [2024-12-12 19:45:30.074838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.524 [2024-12-12 19:45:30.088837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:47.524 19:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.524 19:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:47.524 [2024-12-12 19:45:30.097748] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.461 "name": "raid_bdev1", 00:16:48.461 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:48.461 "strip_size_kb": 64, 00:16:48.461 "state": "online", 00:16:48.461 
"raid_level": "raid5f", 00:16:48.461 "superblock": true, 00:16:48.461 "num_base_bdevs": 4, 00:16:48.461 "num_base_bdevs_discovered": 4, 00:16:48.461 "num_base_bdevs_operational": 4, 00:16:48.461 "process": { 00:16:48.461 "type": "rebuild", 00:16:48.461 "target": "spare", 00:16:48.461 "progress": { 00:16:48.461 "blocks": 19200, 00:16:48.461 "percent": 10 00:16:48.461 } 00:16:48.461 }, 00:16:48.461 "base_bdevs_list": [ 00:16:48.461 { 00:16:48.461 "name": "spare", 00:16:48.461 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:48.461 "is_configured": true, 00:16:48.461 "data_offset": 2048, 00:16:48.461 "data_size": 63488 00:16:48.461 }, 00:16:48.461 { 00:16:48.461 "name": "BaseBdev2", 00:16:48.461 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:48.461 "is_configured": true, 00:16:48.461 "data_offset": 2048, 00:16:48.461 "data_size": 63488 00:16:48.461 }, 00:16:48.461 { 00:16:48.461 "name": "BaseBdev3", 00:16:48.461 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:48.461 "is_configured": true, 00:16:48.461 "data_offset": 2048, 00:16:48.461 "data_size": 63488 00:16:48.461 }, 00:16:48.461 { 00:16:48.461 "name": "BaseBdev4", 00:16:48.461 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:48.461 "is_configured": true, 00:16:48.461 "data_offset": 2048, 00:16:48.461 "data_size": 63488 00:16:48.461 } 00:16:48.461 ] 00:16:48.461 }' 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.461 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.461 [2024-12-12 19:45:31.248607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.461 [2024-12-12 19:45:31.303590] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:48.461 [2024-12-12 19:45:31.303664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.461 [2024-12-12 19:45:31.303682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.461 [2024-12-12 19:45:31.303692] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.721 "name": "raid_bdev1", 00:16:48.721 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:48.721 "strip_size_kb": 64, 00:16:48.721 "state": "online", 00:16:48.721 "raid_level": "raid5f", 00:16:48.721 "superblock": true, 00:16:48.721 "num_base_bdevs": 4, 00:16:48.721 "num_base_bdevs_discovered": 3, 00:16:48.721 "num_base_bdevs_operational": 3, 00:16:48.721 "base_bdevs_list": [ 00:16:48.721 { 00:16:48.721 "name": null, 00:16:48.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.721 "is_configured": false, 00:16:48.721 "data_offset": 0, 00:16:48.721 "data_size": 63488 00:16:48.721 }, 00:16:48.721 { 00:16:48.721 "name": "BaseBdev2", 00:16:48.721 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:48.721 "is_configured": true, 00:16:48.721 "data_offset": 2048, 00:16:48.721 "data_size": 63488 00:16:48.721 }, 00:16:48.721 { 00:16:48.721 "name": "BaseBdev3", 00:16:48.721 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:48.721 "is_configured": true, 00:16:48.721 "data_offset": 2048, 00:16:48.721 "data_size": 63488 00:16:48.721 }, 00:16:48.721 { 00:16:48.721 "name": "BaseBdev4", 00:16:48.721 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:48.721 "is_configured": true, 00:16:48.721 "data_offset": 2048, 00:16:48.721 "data_size": 63488 00:16:48.721 } 00:16:48.721 ] 00:16:48.721 
}' 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.721 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.981 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:48.982 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.982 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.982 [2024-12-12 19:45:31.756217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:48.982 [2024-12-12 19:45:31.756317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.982 [2024-12-12 19:45:31.756360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:48.982 [2024-12-12 19:45:31.756388] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.982 [2024-12-12 19:45:31.756910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.982 [2024-12-12 19:45:31.756978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:48.982 [2024-12-12 19:45:31.757120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:48.982 [2024-12-12 19:45:31.757163] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:48.982 [2024-12-12 19:45:31.757223] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:48.982 [2024-12-12 19:45:31.757285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.982 [2024-12-12 19:45:31.771382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:48.982 spare 00:16:48.982 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.982 19:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:48.982 [2024-12-12 19:45:31.779578] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.362 "name": "raid_bdev1", 00:16:50.362 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:50.362 "strip_size_kb": 64, 00:16:50.362 "state": 
"online", 00:16:50.362 "raid_level": "raid5f", 00:16:50.362 "superblock": true, 00:16:50.362 "num_base_bdevs": 4, 00:16:50.362 "num_base_bdevs_discovered": 4, 00:16:50.362 "num_base_bdevs_operational": 4, 00:16:50.362 "process": { 00:16:50.362 "type": "rebuild", 00:16:50.362 "target": "spare", 00:16:50.362 "progress": { 00:16:50.362 "blocks": 19200, 00:16:50.362 "percent": 10 00:16:50.362 } 00:16:50.362 }, 00:16:50.362 "base_bdevs_list": [ 00:16:50.362 { 00:16:50.362 "name": "spare", 00:16:50.362 "uuid": "b9d0f9af-9bb3-5ea5-9ab3-3fc1043a858d", 00:16:50.362 "is_configured": true, 00:16:50.362 "data_offset": 2048, 00:16:50.362 "data_size": 63488 00:16:50.362 }, 00:16:50.362 { 00:16:50.362 "name": "BaseBdev2", 00:16:50.362 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:50.362 "is_configured": true, 00:16:50.362 "data_offset": 2048, 00:16:50.362 "data_size": 63488 00:16:50.362 }, 00:16:50.362 { 00:16:50.362 "name": "BaseBdev3", 00:16:50.362 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:50.362 "is_configured": true, 00:16:50.362 "data_offset": 2048, 00:16:50.362 "data_size": 63488 00:16:50.362 }, 00:16:50.362 { 00:16:50.362 "name": "BaseBdev4", 00:16:50.362 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:50.362 "is_configured": true, 00:16:50.362 "data_offset": 2048, 00:16:50.362 "data_size": 63488 00:16:50.362 } 00:16:50.362 ] 00:16:50.362 }' 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:50.362 19:45:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.362 19:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.363 [2024-12-12 19:45:32.930464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.363 [2024-12-12 19:45:32.985341] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:50.363 [2024-12-12 19:45:32.985429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.363 [2024-12-12 19:45:32.985450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.363 [2024-12-12 19:45:32.985457] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.363 19:45:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.363 "name": "raid_bdev1", 00:16:50.363 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:50.363 "strip_size_kb": 64, 00:16:50.363 "state": "online", 00:16:50.363 "raid_level": "raid5f", 00:16:50.363 "superblock": true, 00:16:50.363 "num_base_bdevs": 4, 00:16:50.363 "num_base_bdevs_discovered": 3, 00:16:50.363 "num_base_bdevs_operational": 3, 00:16:50.363 "base_bdevs_list": [ 00:16:50.363 { 00:16:50.363 "name": null, 00:16:50.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.363 "is_configured": false, 00:16:50.363 "data_offset": 0, 00:16:50.363 "data_size": 63488 00:16:50.363 }, 00:16:50.363 { 00:16:50.363 "name": "BaseBdev2", 00:16:50.363 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:50.363 "is_configured": true, 00:16:50.363 "data_offset": 2048, 00:16:50.363 "data_size": 63488 00:16:50.363 }, 00:16:50.363 { 00:16:50.363 "name": "BaseBdev3", 00:16:50.363 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:50.363 "is_configured": true, 00:16:50.363 "data_offset": 2048, 00:16:50.363 "data_size": 63488 00:16:50.363 }, 00:16:50.363 { 00:16:50.363 "name": "BaseBdev4", 00:16:50.363 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:50.363 "is_configured": true, 00:16:50.363 "data_offset": 2048, 00:16:50.363 
"data_size": 63488 00:16:50.363 } 00:16:50.363 ] 00:16:50.363 }' 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.363 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.622 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.623 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.623 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.623 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.623 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.623 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.623 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.623 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.623 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.623 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.882 "name": "raid_bdev1", 00:16:50.882 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:50.882 "strip_size_kb": 64, 00:16:50.882 "state": "online", 00:16:50.882 "raid_level": "raid5f", 00:16:50.882 "superblock": true, 00:16:50.882 "num_base_bdevs": 4, 00:16:50.882 "num_base_bdevs_discovered": 3, 00:16:50.882 "num_base_bdevs_operational": 3, 00:16:50.882 "base_bdevs_list": [ 00:16:50.882 { 00:16:50.882 "name": null, 00:16:50.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.882 
"is_configured": false, 00:16:50.882 "data_offset": 0, 00:16:50.882 "data_size": 63488 00:16:50.882 }, 00:16:50.882 { 00:16:50.882 "name": "BaseBdev2", 00:16:50.882 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:50.882 "is_configured": true, 00:16:50.882 "data_offset": 2048, 00:16:50.882 "data_size": 63488 00:16:50.882 }, 00:16:50.882 { 00:16:50.882 "name": "BaseBdev3", 00:16:50.882 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:50.882 "is_configured": true, 00:16:50.882 "data_offset": 2048, 00:16:50.882 "data_size": 63488 00:16:50.882 }, 00:16:50.882 { 00:16:50.882 "name": "BaseBdev4", 00:16:50.882 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:50.882 "is_configured": true, 00:16:50.882 "data_offset": 2048, 00:16:50.882 "data_size": 63488 00:16:50.882 } 00:16:50.882 ] 00:16:50.882 }' 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.882 19:45:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.882 [2024-12-12 19:45:33.605118] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:50.882 [2024-12-12 19:45:33.605169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.882 [2024-12-12 19:45:33.605190] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:50.882 [2024-12-12 19:45:33.605199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.882 [2024-12-12 19:45:33.605650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.882 [2024-12-12 19:45:33.605690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:50.882 [2024-12-12 19:45:33.605792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:50.882 [2024-12-12 19:45:33.605805] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:50.882 [2024-12-12 19:45:33.605816] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:50.882 [2024-12-12 19:45:33.605825] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:50.882 BaseBdev1 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.882 19:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:51.820 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:51.820 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.820 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:51.820 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.820 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.820 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.821 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.821 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.821 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.821 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.821 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.821 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.821 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.821 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.821 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.080 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.080 "name": "raid_bdev1", 00:16:52.080 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:52.080 "strip_size_kb": 64, 00:16:52.080 "state": "online", 00:16:52.080 "raid_level": "raid5f", 00:16:52.080 "superblock": true, 00:16:52.080 "num_base_bdevs": 4, 00:16:52.080 "num_base_bdevs_discovered": 3, 00:16:52.080 "num_base_bdevs_operational": 3, 00:16:52.080 "base_bdevs_list": [ 00:16:52.080 { 00:16:52.080 "name": null, 00:16:52.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.080 "is_configured": false, 00:16:52.080 
"data_offset": 0, 00:16:52.080 "data_size": 63488 00:16:52.080 }, 00:16:52.080 { 00:16:52.080 "name": "BaseBdev2", 00:16:52.080 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:52.080 "is_configured": true, 00:16:52.080 "data_offset": 2048, 00:16:52.080 "data_size": 63488 00:16:52.080 }, 00:16:52.080 { 00:16:52.080 "name": "BaseBdev3", 00:16:52.080 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:52.080 "is_configured": true, 00:16:52.080 "data_offset": 2048, 00:16:52.080 "data_size": 63488 00:16:52.080 }, 00:16:52.080 { 00:16:52.080 "name": "BaseBdev4", 00:16:52.080 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:52.080 "is_configured": true, 00:16:52.080 "data_offset": 2048, 00:16:52.080 "data_size": 63488 00:16:52.080 } 00:16:52.080 ] 00:16:52.080 }' 00:16:52.080 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.080 19:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.340 "name": "raid_bdev1", 00:16:52.340 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:52.340 "strip_size_kb": 64, 00:16:52.340 "state": "online", 00:16:52.340 "raid_level": "raid5f", 00:16:52.340 "superblock": true, 00:16:52.340 "num_base_bdevs": 4, 00:16:52.340 "num_base_bdevs_discovered": 3, 00:16:52.340 "num_base_bdevs_operational": 3, 00:16:52.340 "base_bdevs_list": [ 00:16:52.340 { 00:16:52.340 "name": null, 00:16:52.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.340 "is_configured": false, 00:16:52.340 "data_offset": 0, 00:16:52.340 "data_size": 63488 00:16:52.340 }, 00:16:52.340 { 00:16:52.340 "name": "BaseBdev2", 00:16:52.340 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:52.340 "is_configured": true, 00:16:52.340 "data_offset": 2048, 00:16:52.340 "data_size": 63488 00:16:52.340 }, 00:16:52.340 { 00:16:52.340 "name": "BaseBdev3", 00:16:52.340 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:52.340 "is_configured": true, 00:16:52.340 "data_offset": 2048, 00:16:52.340 "data_size": 63488 00:16:52.340 }, 00:16:52.340 { 00:16:52.340 "name": "BaseBdev4", 00:16:52.340 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:52.340 "is_configured": true, 00:16:52.340 "data_offset": 2048, 00:16:52.340 "data_size": 63488 00:16:52.340 } 00:16:52.340 ] 00:16:52.340 }' 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.340 
19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.340 [2024-12-12 19:45:35.166646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.340 [2024-12-12 19:45:35.166822] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:52.340 [2024-12-12 19:45:35.166841] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:52.340 request: 00:16:52.340 { 00:16:52.340 "base_bdev": "BaseBdev1", 00:16:52.340 "raid_bdev": "raid_bdev1", 00:16:52.340 "method": "bdev_raid_add_base_bdev", 00:16:52.340 "req_id": 1 00:16:52.340 } 00:16:52.340 Got JSON-RPC error response 00:16:52.340 response: 00:16:52.340 { 00:16:52.340 "code": -22, 00:16:52.340 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:52.340 } 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:52.340 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.341 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:52.341 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.341 19:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.720 "name": "raid_bdev1", 00:16:53.720 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:53.720 "strip_size_kb": 64, 00:16:53.720 "state": "online", 00:16:53.720 "raid_level": "raid5f", 00:16:53.720 "superblock": true, 00:16:53.720 "num_base_bdevs": 4, 00:16:53.720 "num_base_bdevs_discovered": 3, 00:16:53.720 "num_base_bdevs_operational": 3, 00:16:53.720 "base_bdevs_list": [ 00:16:53.720 { 00:16:53.720 "name": null, 00:16:53.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.720 "is_configured": false, 00:16:53.720 "data_offset": 0, 00:16:53.720 "data_size": 63488 00:16:53.720 }, 00:16:53.720 { 00:16:53.720 "name": "BaseBdev2", 00:16:53.720 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:53.720 "is_configured": true, 00:16:53.720 "data_offset": 2048, 00:16:53.720 "data_size": 63488 00:16:53.720 }, 00:16:53.720 { 00:16:53.720 "name": "BaseBdev3", 00:16:53.720 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:53.720 "is_configured": true, 00:16:53.720 "data_offset": 2048, 00:16:53.720 "data_size": 63488 00:16:53.720 }, 00:16:53.720 { 00:16:53.720 "name": "BaseBdev4", 00:16:53.720 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:53.720 "is_configured": true, 00:16:53.720 "data_offset": 2048, 00:16:53.720 "data_size": 63488 00:16:53.720 } 00:16:53.720 ] 00:16:53.720 }' 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.720 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.979 "name": "raid_bdev1", 00:16:53.979 "uuid": "1fe80e3a-571b-4efb-b799-91a0289fc869", 00:16:53.979 "strip_size_kb": 64, 00:16:53.979 "state": "online", 00:16:53.979 "raid_level": "raid5f", 00:16:53.979 "superblock": true, 00:16:53.979 "num_base_bdevs": 4, 00:16:53.979 "num_base_bdevs_discovered": 3, 00:16:53.979 "num_base_bdevs_operational": 3, 00:16:53.979 "base_bdevs_list": [ 00:16:53.979 { 00:16:53.979 "name": null, 00:16:53.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.979 "is_configured": false, 00:16:53.979 "data_offset": 0, 00:16:53.979 "data_size": 63488 00:16:53.979 }, 00:16:53.979 { 00:16:53.979 "name": "BaseBdev2", 00:16:53.979 "uuid": "fef3a924-cdf7-5b5e-8102-dd16c744774d", 00:16:53.979 "is_configured": true, 
00:16:53.979 "data_offset": 2048, 00:16:53.979 "data_size": 63488 00:16:53.979 }, 00:16:53.979 { 00:16:53.979 "name": "BaseBdev3", 00:16:53.979 "uuid": "dcc05358-05d4-5829-8d90-e1c149b4ae90", 00:16:53.979 "is_configured": true, 00:16:53.979 "data_offset": 2048, 00:16:53.979 "data_size": 63488 00:16:53.979 }, 00:16:53.979 { 00:16:53.979 "name": "BaseBdev4", 00:16:53.979 "uuid": "cb1dc1a1-d231-566a-b370-84c390f948c1", 00:16:53.979 "is_configured": true, 00:16:53.979 "data_offset": 2048, 00:16:53.979 "data_size": 63488 00:16:53.979 } 00:16:53.979 ] 00:16:53.979 }' 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.979 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.980 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86764 00:16:53.980 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86764 ']' 00:16:53.980 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86764 00:16:53.980 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:53.980 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.980 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86764 00:16:53.980 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.980 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.239 killing process with pid 86764 00:16:54.239 Received shutdown signal, test 
time was about 60.000000 seconds 00:16:54.239 00:16:54.239 Latency(us) 00:16:54.239 [2024-12-12T19:45:37.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.239 [2024-12-12T19:45:37.084Z] =================================================================================================================== 00:16:54.239 [2024-12-12T19:45:37.084Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:54.239 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86764' 00:16:54.239 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86764 00:16:54.239 [2024-12-12 19:45:36.824614] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.239 [2024-12-12 19:45:36.824735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.239 19:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86764 00:16:54.239 [2024-12-12 19:45:36.824823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.239 [2024-12-12 19:45:36.824835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:54.499 [2024-12-12 19:45:37.282719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.880 19:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:55.880 ************************************ 00:16:55.880 END TEST raid5f_rebuild_test_sb 00:16:55.880 ************************************ 00:16:55.880 00:16:55.880 real 0m26.459s 00:16:55.880 user 0m32.979s 00:16:55.880 sys 0m2.993s 00:16:55.880 19:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.880 19:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.880 19:45:38 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:55.881 19:45:38 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:55.881 19:45:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:55.881 19:45:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.881 19:45:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.881 ************************************ 00:16:55.881 START TEST raid_state_function_test_sb_4k 00:16:55.881 ************************************ 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:55.881 19:45:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=87574 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87574' 00:16:55.881 Process raid pid: 87574 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 87574 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87574 ']' 00:16:55.881 19:45:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.881 19:45:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.881 [2024-12-12 19:45:38.482510] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:55.881 [2024-12-12 19:45:38.482716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.881 [2024-12-12 19:45:38.664592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.141 [2024-12-12 19:45:38.771842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.141 [2024-12-12 19:45:38.956139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.141 [2024-12-12 19:45:38.956175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.711 [2024-12-12 19:45:39.300886] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.711 [2024-12-12 19:45:39.300938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.711 [2024-12-12 19:45:39.300955] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.711 [2024-12-12 19:45:39.300965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.711 
19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.711 "name": "Existed_Raid", 00:16:56.711 "uuid": "5d2132d0-df28-4699-b3de-bfadf4fae0b4", 00:16:56.711 "strip_size_kb": 0, 00:16:56.711 "state": "configuring", 00:16:56.711 "raid_level": "raid1", 00:16:56.711 "superblock": true, 00:16:56.711 "num_base_bdevs": 2, 00:16:56.711 "num_base_bdevs_discovered": 0, 00:16:56.711 "num_base_bdevs_operational": 2, 00:16:56.711 "base_bdevs_list": [ 00:16:56.711 { 00:16:56.711 "name": "BaseBdev1", 00:16:56.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.711 "is_configured": false, 00:16:56.711 "data_offset": 0, 00:16:56.711 "data_size": 0 00:16:56.711 }, 00:16:56.711 { 00:16:56.711 "name": "BaseBdev2", 00:16:56.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.711 "is_configured": false, 00:16:56.711 "data_offset": 0, 00:16:56.711 "data_size": 0 00:16:56.711 } 00:16:56.711 ] 00:16:56.711 }' 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.711 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 [2024-12-12 19:45:39.740123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.971 [2024-12-12 19:45:39.740198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 [2024-12-12 19:45:39.748109] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.971 [2024-12-12 19:45:39.748184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.971 [2024-12-12 19:45:39.748210] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.971 [2024-12-12 19:45:39.748234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.971 19:45:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 [2024-12-12 19:45:39.789340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.971 BaseBdev1 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.971 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 [ 00:16:57.231 { 00:16:57.231 "name": "BaseBdev1", 00:16:57.231 "aliases": [ 00:16:57.231 
"d27bed3c-7d23-44d6-85a9-8ce823d5f300" 00:16:57.231 ], 00:16:57.231 "product_name": "Malloc disk", 00:16:57.231 "block_size": 4096, 00:16:57.231 "num_blocks": 8192, 00:16:57.231 "uuid": "d27bed3c-7d23-44d6-85a9-8ce823d5f300", 00:16:57.231 "assigned_rate_limits": { 00:16:57.231 "rw_ios_per_sec": 0, 00:16:57.231 "rw_mbytes_per_sec": 0, 00:16:57.231 "r_mbytes_per_sec": 0, 00:16:57.231 "w_mbytes_per_sec": 0 00:16:57.231 }, 00:16:57.231 "claimed": true, 00:16:57.231 "claim_type": "exclusive_write", 00:16:57.231 "zoned": false, 00:16:57.231 "supported_io_types": { 00:16:57.231 "read": true, 00:16:57.231 "write": true, 00:16:57.231 "unmap": true, 00:16:57.231 "flush": true, 00:16:57.231 "reset": true, 00:16:57.231 "nvme_admin": false, 00:16:57.231 "nvme_io": false, 00:16:57.231 "nvme_io_md": false, 00:16:57.231 "write_zeroes": true, 00:16:57.231 "zcopy": true, 00:16:57.231 "get_zone_info": false, 00:16:57.231 "zone_management": false, 00:16:57.231 "zone_append": false, 00:16:57.231 "compare": false, 00:16:57.231 "compare_and_write": false, 00:16:57.231 "abort": true, 00:16:57.231 "seek_hole": false, 00:16:57.231 "seek_data": false, 00:16:57.231 "copy": true, 00:16:57.232 "nvme_iov_md": false 00:16:57.232 }, 00:16:57.232 "memory_domains": [ 00:16:57.232 { 00:16:57.232 "dma_device_id": "system", 00:16:57.232 "dma_device_type": 1 00:16:57.232 }, 00:16:57.232 { 00:16:57.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.232 "dma_device_type": 2 00:16:57.232 } 00:16:57.232 ], 00:16:57.232 "driver_specific": {} 00:16:57.232 } 00:16:57.232 ] 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.232 "name": "Existed_Raid", 00:16:57.232 "uuid": "dd407be4-f159-4dcf-8cdc-e603a9b2ed79", 00:16:57.232 "strip_size_kb": 0, 00:16:57.232 "state": "configuring", 00:16:57.232 "raid_level": "raid1", 00:16:57.232 "superblock": true, 00:16:57.232 "num_base_bdevs": 2, 00:16:57.232 
"num_base_bdevs_discovered": 1, 00:16:57.232 "num_base_bdevs_operational": 2, 00:16:57.232 "base_bdevs_list": [ 00:16:57.232 { 00:16:57.232 "name": "BaseBdev1", 00:16:57.232 "uuid": "d27bed3c-7d23-44d6-85a9-8ce823d5f300", 00:16:57.232 "is_configured": true, 00:16:57.232 "data_offset": 256, 00:16:57.232 "data_size": 7936 00:16:57.232 }, 00:16:57.232 { 00:16:57.232 "name": "BaseBdev2", 00:16:57.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.232 "is_configured": false, 00:16:57.232 "data_offset": 0, 00:16:57.232 "data_size": 0 00:16:57.232 } 00:16:57.232 ] 00:16:57.232 }' 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.232 19:45:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.492 [2024-12-12 19:45:40.264529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:57.492 [2024-12-12 19:45:40.264578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.492 [2024-12-12 19:45:40.276570] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.492 [2024-12-12 19:45:40.278308] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.492 [2024-12-12 19:45:40.278360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.492 "name": "Existed_Raid", 00:16:57.492 "uuid": "890645f8-dec4-4473-bd60-7b7b582e067e", 00:16:57.492 "strip_size_kb": 0, 00:16:57.492 "state": "configuring", 00:16:57.492 "raid_level": "raid1", 00:16:57.492 "superblock": true, 00:16:57.492 "num_base_bdevs": 2, 00:16:57.492 "num_base_bdevs_discovered": 1, 00:16:57.492 "num_base_bdevs_operational": 2, 00:16:57.492 "base_bdevs_list": [ 00:16:57.492 { 00:16:57.492 "name": "BaseBdev1", 00:16:57.492 "uuid": "d27bed3c-7d23-44d6-85a9-8ce823d5f300", 00:16:57.492 "is_configured": true, 00:16:57.492 "data_offset": 256, 00:16:57.492 "data_size": 7936 00:16:57.492 }, 00:16:57.492 { 00:16:57.492 "name": "BaseBdev2", 00:16:57.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.492 "is_configured": false, 00:16:57.492 "data_offset": 0, 00:16:57.492 "data_size": 0 00:16:57.492 } 00:16:57.492 ] 00:16:57.492 }' 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.492 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.063 19:45:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.063 [2024-12-12 19:45:40.787235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.063 [2024-12-12 19:45:40.787601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:58.063 [2024-12-12 19:45:40.787653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:58.063 [2024-12-12 19:45:40.787954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:58.063 [2024-12-12 19:45:40.788158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:58.063 BaseBdev2 00:16:58.063 [2024-12-12 19:45:40.788213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:58.063 [2024-12-12 19:45:40.788432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:58.063 19:45:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.063 [ 00:16:58.063 { 00:16:58.063 "name": "BaseBdev2", 00:16:58.063 "aliases": [ 00:16:58.063 "9cfaa18a-a337-4f69-95c8-15a3f1cb73da" 00:16:58.063 ], 00:16:58.063 "product_name": "Malloc disk", 00:16:58.063 "block_size": 4096, 00:16:58.063 "num_blocks": 8192, 00:16:58.063 "uuid": "9cfaa18a-a337-4f69-95c8-15a3f1cb73da", 00:16:58.063 "assigned_rate_limits": { 00:16:58.063 "rw_ios_per_sec": 0, 00:16:58.063 "rw_mbytes_per_sec": 0, 00:16:58.063 "r_mbytes_per_sec": 0, 00:16:58.063 "w_mbytes_per_sec": 0 00:16:58.063 }, 00:16:58.063 "claimed": true, 00:16:58.063 "claim_type": "exclusive_write", 00:16:58.063 "zoned": false, 00:16:58.063 "supported_io_types": { 00:16:58.063 "read": true, 00:16:58.063 "write": true, 00:16:58.063 "unmap": true, 00:16:58.063 "flush": true, 00:16:58.063 "reset": true, 00:16:58.063 "nvme_admin": false, 00:16:58.063 "nvme_io": false, 00:16:58.063 "nvme_io_md": false, 00:16:58.063 "write_zeroes": true, 00:16:58.063 "zcopy": true, 00:16:58.063 "get_zone_info": false, 00:16:58.063 "zone_management": false, 00:16:58.063 "zone_append": false, 00:16:58.063 "compare": false, 00:16:58.063 "compare_and_write": false, 00:16:58.063 "abort": true, 00:16:58.063 "seek_hole": false, 00:16:58.063 "seek_data": false, 00:16:58.063 "copy": true, 00:16:58.063 "nvme_iov_md": false 
00:16:58.063 }, 00:16:58.063 "memory_domains": [ 00:16:58.063 { 00:16:58.063 "dma_device_id": "system", 00:16:58.063 "dma_device_type": 1 00:16:58.063 }, 00:16:58.063 { 00:16:58.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.063 "dma_device_type": 2 00:16:58.063 } 00:16:58.063 ], 00:16:58.063 "driver_specific": {} 00:16:58.063 } 00:16:58.063 ] 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.063 "name": "Existed_Raid", 00:16:58.063 "uuid": "890645f8-dec4-4473-bd60-7b7b582e067e", 00:16:58.063 "strip_size_kb": 0, 00:16:58.063 "state": "online", 00:16:58.063 "raid_level": "raid1", 00:16:58.063 "superblock": true, 00:16:58.063 "num_base_bdevs": 2, 00:16:58.063 "num_base_bdevs_discovered": 2, 00:16:58.063 "num_base_bdevs_operational": 2, 00:16:58.063 "base_bdevs_list": [ 00:16:58.063 { 00:16:58.063 "name": "BaseBdev1", 00:16:58.063 "uuid": "d27bed3c-7d23-44d6-85a9-8ce823d5f300", 00:16:58.063 "is_configured": true, 00:16:58.063 "data_offset": 256, 00:16:58.063 "data_size": 7936 00:16:58.063 }, 00:16:58.063 { 00:16:58.063 "name": "BaseBdev2", 00:16:58.063 "uuid": "9cfaa18a-a337-4f69-95c8-15a3f1cb73da", 00:16:58.063 "is_configured": true, 00:16:58.063 "data_offset": 256, 00:16:58.063 "data_size": 7936 00:16:58.063 } 00:16:58.063 ] 00:16:58.063 }' 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.063 19:45:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:58.638 19:45:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.638 [2024-12-12 19:45:41.306610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.638 "name": "Existed_Raid", 00:16:58.638 "aliases": [ 00:16:58.638 "890645f8-dec4-4473-bd60-7b7b582e067e" 00:16:58.638 ], 00:16:58.638 "product_name": "Raid Volume", 00:16:58.638 "block_size": 4096, 00:16:58.638 "num_blocks": 7936, 00:16:58.638 "uuid": "890645f8-dec4-4473-bd60-7b7b582e067e", 00:16:58.638 "assigned_rate_limits": { 00:16:58.638 "rw_ios_per_sec": 0, 00:16:58.638 "rw_mbytes_per_sec": 0, 00:16:58.638 "r_mbytes_per_sec": 0, 00:16:58.638 "w_mbytes_per_sec": 0 00:16:58.638 }, 00:16:58.638 "claimed": false, 00:16:58.638 "zoned": false, 00:16:58.638 "supported_io_types": { 00:16:58.638 "read": true, 
00:16:58.638 "write": true, 00:16:58.638 "unmap": false, 00:16:58.638 "flush": false, 00:16:58.638 "reset": true, 00:16:58.638 "nvme_admin": false, 00:16:58.638 "nvme_io": false, 00:16:58.638 "nvme_io_md": false, 00:16:58.638 "write_zeroes": true, 00:16:58.638 "zcopy": false, 00:16:58.638 "get_zone_info": false, 00:16:58.638 "zone_management": false, 00:16:58.638 "zone_append": false, 00:16:58.638 "compare": false, 00:16:58.638 "compare_and_write": false, 00:16:58.638 "abort": false, 00:16:58.638 "seek_hole": false, 00:16:58.638 "seek_data": false, 00:16:58.638 "copy": false, 00:16:58.638 "nvme_iov_md": false 00:16:58.638 }, 00:16:58.638 "memory_domains": [ 00:16:58.638 { 00:16:58.638 "dma_device_id": "system", 00:16:58.638 "dma_device_type": 1 00:16:58.638 }, 00:16:58.638 { 00:16:58.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.638 "dma_device_type": 2 00:16:58.638 }, 00:16:58.638 { 00:16:58.638 "dma_device_id": "system", 00:16:58.638 "dma_device_type": 1 00:16:58.638 }, 00:16:58.638 { 00:16:58.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.638 "dma_device_type": 2 00:16:58.638 } 00:16:58.638 ], 00:16:58.638 "driver_specific": { 00:16:58.638 "raid": { 00:16:58.638 "uuid": "890645f8-dec4-4473-bd60-7b7b582e067e", 00:16:58.638 "strip_size_kb": 0, 00:16:58.638 "state": "online", 00:16:58.638 "raid_level": "raid1", 00:16:58.638 "superblock": true, 00:16:58.638 "num_base_bdevs": 2, 00:16:58.638 "num_base_bdevs_discovered": 2, 00:16:58.638 "num_base_bdevs_operational": 2, 00:16:58.638 "base_bdevs_list": [ 00:16:58.638 { 00:16:58.638 "name": "BaseBdev1", 00:16:58.638 "uuid": "d27bed3c-7d23-44d6-85a9-8ce823d5f300", 00:16:58.638 "is_configured": true, 00:16:58.638 "data_offset": 256, 00:16:58.638 "data_size": 7936 00:16:58.638 }, 00:16:58.638 { 00:16:58.638 "name": "BaseBdev2", 00:16:58.638 "uuid": "9cfaa18a-a337-4f69-95c8-15a3f1cb73da", 00:16:58.638 "is_configured": true, 00:16:58.638 "data_offset": 256, 00:16:58.638 "data_size": 7936 00:16:58.638 } 
00:16:58.638 ] 00:16:58.638 } 00:16:58.638 } 00:16:58.638 }' 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:58.638 BaseBdev2' 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.638 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.898 [2024-12-12 19:45:41.518159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.898 "name": "Existed_Raid", 00:16:58.898 "uuid": "890645f8-dec4-4473-bd60-7b7b582e067e", 00:16:58.898 "strip_size_kb": 0, 00:16:58.898 "state": "online", 00:16:58.898 "raid_level": "raid1", 00:16:58.898 "superblock": true, 00:16:58.898 "num_base_bdevs": 2, 00:16:58.898 
"num_base_bdevs_discovered": 1, 00:16:58.898 "num_base_bdevs_operational": 1, 00:16:58.898 "base_bdevs_list": [ 00:16:58.898 { 00:16:58.898 "name": null, 00:16:58.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.898 "is_configured": false, 00:16:58.898 "data_offset": 0, 00:16:58.898 "data_size": 7936 00:16:58.898 }, 00:16:58.898 { 00:16:58.898 "name": "BaseBdev2", 00:16:58.898 "uuid": "9cfaa18a-a337-4f69-95c8-15a3f1cb73da", 00:16:58.898 "is_configured": true, 00:16:58.898 "data_offset": 256, 00:16:58.898 "data_size": 7936 00:16:58.898 } 00:16:58.898 ] 00:16:58.898 }' 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.898 19:45:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:59.467 19:45:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.467 [2024-12-12 19:45:42.079601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:59.467 [2024-12-12 19:45:42.079743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.467 [2024-12-12 19:45:42.170327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.467 [2024-12-12 19:45:42.170433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.467 [2024-12-12 19:45:42.170475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 87574 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87574 ']' 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87574 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87574 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87574' 00:16:59.467 killing process with pid 87574 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87574 00:16:59.467 [2024-12-12 19:45:42.251943] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.467 19:45:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87574 00:16:59.467 [2024-12-12 19:45:42.267379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.848 19:45:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:00.848 00:17:00.848 real 0m4.938s 00:17:00.848 user 0m7.141s 00:17:00.848 sys 0m0.848s 00:17:00.848 ************************************ 00:17:00.848 END TEST raid_state_function_test_sb_4k 00:17:00.848 
************************************ 00:17:00.848 19:45:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.848 19:45:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.848 19:45:43 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:00.848 19:45:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:00.848 19:45:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.848 19:45:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.848 ************************************ 00:17:00.848 START TEST raid_superblock_test_4k 00:17:00.848 ************************************ 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:00.848 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=87821 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 87821 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 87821 ']' 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.849 19:45:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.849 [2024-12-12 19:45:43.480619] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:00.849 [2024-12-12 19:45:43.480771] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87821 ] 00:17:00.849 [2024-12-12 19:45:43.649683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.108 [2024-12-12 19:45:43.756644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.108 [2024-12-12 19:45:43.935874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.108 [2024-12-12 19:45:43.935903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.677 malloc1 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.677 [2024-12-12 19:45:44.349298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.677 [2024-12-12 19:45:44.349405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.677 [2024-12-12 19:45:44.349445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.677 [2024-12-12 19:45:44.349474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.677 [2024-12-12 19:45:44.351615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.677 [2024-12-12 19:45:44.351695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.677 pt1 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.677 malloc2 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.677 [2024-12-12 19:45:44.407766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.677 [2024-12-12 19:45:44.407866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.677 [2024-12-12 19:45:44.407904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.677 [2024-12-12 19:45:44.407930] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.677 [2024-12-12 19:45:44.409898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.677 [2024-12-12 
19:45:44.409965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.677 pt2 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.677 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.677 [2024-12-12 19:45:44.419791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.677 [2024-12-12 19:45:44.421463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.677 [2024-12-12 19:45:44.421643] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:01.678 [2024-12-12 19:45:44.421660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:01.678 [2024-12-12 19:45:44.421882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:01.678 [2024-12-12 19:45:44.422032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:01.678 [2024-12-12 19:45:44.422046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:01.678 [2024-12-12 19:45:44.422192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.678 "name": "raid_bdev1", 00:17:01.678 "uuid": "899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:01.678 "strip_size_kb": 0, 00:17:01.678 "state": "online", 00:17:01.678 "raid_level": "raid1", 00:17:01.678 "superblock": true, 00:17:01.678 "num_base_bdevs": 2, 00:17:01.678 
"num_base_bdevs_discovered": 2, 00:17:01.678 "num_base_bdevs_operational": 2, 00:17:01.678 "base_bdevs_list": [ 00:17:01.678 { 00:17:01.678 "name": "pt1", 00:17:01.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:01.678 "is_configured": true, 00:17:01.678 "data_offset": 256, 00:17:01.678 "data_size": 7936 00:17:01.678 }, 00:17:01.678 { 00:17:01.678 "name": "pt2", 00:17:01.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.678 "is_configured": true, 00:17:01.678 "data_offset": 256, 00:17:01.678 "data_size": 7936 00:17:01.678 } 00:17:01.678 ] 00:17:01.678 }' 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.678 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.247 [2024-12-12 19:45:44.843319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.247 "name": "raid_bdev1", 00:17:02.247 "aliases": [ 00:17:02.247 "899051e7-79a6-4a23-b67d-f3209e04a2d7" 00:17:02.247 ], 00:17:02.247 "product_name": "Raid Volume", 00:17:02.247 "block_size": 4096, 00:17:02.247 "num_blocks": 7936, 00:17:02.247 "uuid": "899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:02.247 "assigned_rate_limits": { 00:17:02.247 "rw_ios_per_sec": 0, 00:17:02.247 "rw_mbytes_per_sec": 0, 00:17:02.247 "r_mbytes_per_sec": 0, 00:17:02.247 "w_mbytes_per_sec": 0 00:17:02.247 }, 00:17:02.247 "claimed": false, 00:17:02.247 "zoned": false, 00:17:02.247 "supported_io_types": { 00:17:02.247 "read": true, 00:17:02.247 "write": true, 00:17:02.247 "unmap": false, 00:17:02.247 "flush": false, 00:17:02.247 "reset": true, 00:17:02.247 "nvme_admin": false, 00:17:02.247 "nvme_io": false, 00:17:02.247 "nvme_io_md": false, 00:17:02.247 "write_zeroes": true, 00:17:02.247 "zcopy": false, 00:17:02.247 "get_zone_info": false, 00:17:02.247 "zone_management": false, 00:17:02.247 "zone_append": false, 00:17:02.247 "compare": false, 00:17:02.247 "compare_and_write": false, 00:17:02.247 "abort": false, 00:17:02.247 "seek_hole": false, 00:17:02.247 "seek_data": false, 00:17:02.247 "copy": false, 00:17:02.247 "nvme_iov_md": false 00:17:02.247 }, 00:17:02.247 "memory_domains": [ 00:17:02.247 { 00:17:02.247 "dma_device_id": "system", 00:17:02.247 "dma_device_type": 1 00:17:02.247 }, 00:17:02.247 { 00:17:02.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.247 "dma_device_type": 2 00:17:02.247 }, 00:17:02.247 { 00:17:02.247 "dma_device_id": "system", 00:17:02.247 "dma_device_type": 1 00:17:02.247 }, 00:17:02.247 { 00:17:02.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.247 "dma_device_type": 2 00:17:02.247 } 00:17:02.247 ], 
00:17:02.247 "driver_specific": { 00:17:02.247 "raid": { 00:17:02.247 "uuid": "899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:02.247 "strip_size_kb": 0, 00:17:02.247 "state": "online", 00:17:02.247 "raid_level": "raid1", 00:17:02.247 "superblock": true, 00:17:02.247 "num_base_bdevs": 2, 00:17:02.247 "num_base_bdevs_discovered": 2, 00:17:02.247 "num_base_bdevs_operational": 2, 00:17:02.247 "base_bdevs_list": [ 00:17:02.247 { 00:17:02.247 "name": "pt1", 00:17:02.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.247 "is_configured": true, 00:17:02.247 "data_offset": 256, 00:17:02.247 "data_size": 7936 00:17:02.247 }, 00:17:02.247 { 00:17:02.247 "name": "pt2", 00:17:02.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.247 "is_configured": true, 00:17:02.247 "data_offset": 256, 00:17:02.247 "data_size": 7936 00:17:02.247 } 00:17:02.247 ] 00:17:02.247 } 00:17:02.247 } 00:17:02.247 }' 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:02.247 pt2' 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.247 19:45:44 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.247 19:45:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.247 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.247 [2024-12-12 19:45:45.074907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=899051e7-79a6-4a23-b67d-f3209e04a2d7 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 899051e7-79a6-4a23-b67d-f3209e04a2d7 ']' 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.506 [2024-12-12 19:45:45.118594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.506 [2024-12-12 19:45:45.118614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.506 [2024-12-12 19:45:45.118700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.506 [2024-12-12 19:45:45.118751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.506 [2024-12-12 19:45:45.118762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.506 [2024-12-12 19:45:45.250397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:02.506 [2024-12-12 19:45:45.252201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:02.506 [2024-12-12 19:45:45.252302] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:02.506 [2024-12-12 19:45:45.252425] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:02.506 [2024-12-12 19:45:45.252482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.506 [2024-12-12 19:45:45.252524] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:02.506 request: 00:17:02.506 { 00:17:02.506 "name": "raid_bdev1", 00:17:02.506 "raid_level": "raid1", 00:17:02.506 "base_bdevs": [ 00:17:02.506 "malloc1", 00:17:02.506 "malloc2" 00:17:02.506 ], 00:17:02.506 "superblock": false, 00:17:02.506 "method": "bdev_raid_create", 00:17:02.506 "req_id": 1 00:17:02.506 } 00:17:02.506 Got JSON-RPC error response 00:17:02.506 response: 00:17:02.506 { 00:17:02.506 "code": -17, 00:17:02.506 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:02.506 } 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:02.506 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.507 [2024-12-12 19:45:45.318382] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.507 [2024-12-12 19:45:45.318466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.507 [2024-12-12 19:45:45.318496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:02.507 [2024-12-12 19:45:45.318524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.507 [2024-12-12 19:45:45.320613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.507 [2024-12-12 19:45:45.320680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.507 [2024-12-12 19:45:45.320768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:02.507 [2024-12-12 19:45:45.320840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.507 pt1 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.507 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.765 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.765 "name": "raid_bdev1", 00:17:02.765 "uuid": "899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:02.765 "strip_size_kb": 0, 00:17:02.765 "state": "configuring", 00:17:02.765 "raid_level": "raid1", 00:17:02.765 "superblock": true, 00:17:02.765 "num_base_bdevs": 2, 00:17:02.765 "num_base_bdevs_discovered": 1, 00:17:02.765 "num_base_bdevs_operational": 2, 00:17:02.765 "base_bdevs_list": [ 00:17:02.765 { 00:17:02.765 "name": "pt1", 00:17:02.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.765 "is_configured": true, 00:17:02.765 "data_offset": 256, 00:17:02.765 "data_size": 7936 00:17:02.765 }, 00:17:02.765 { 00:17:02.765 "name": null, 00:17:02.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.765 "is_configured": false, 00:17:02.765 "data_offset": 256, 00:17:02.765 "data_size": 7936 00:17:02.765 } 
00:17:02.765 ] 00:17:02.765 }' 00:17:02.765 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.765 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 [2024-12-12 19:45:45.770385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.025 [2024-12-12 19:45:45.770439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.025 [2024-12-12 19:45:45.770456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:03.025 [2024-12-12 19:45:45.770465] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.025 [2024-12-12 19:45:45.770845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.025 [2024-12-12 19:45:45.770879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.025 [2024-12-12 19:45:45.770937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:03.025 [2024-12-12 19:45:45.770959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.025 [2024-12-12 19:45:45.771094] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:03.025 [2024-12-12 19:45:45.771110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.025 [2024-12-12 19:45:45.771327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:03.025 [2024-12-12 19:45:45.771479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:03.025 [2024-12-12 19:45:45.771486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:03.025 [2024-12-12 19:45:45.771634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.025 pt2 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.025 "name": "raid_bdev1", 00:17:03.025 "uuid": "899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:03.025 "strip_size_kb": 0, 00:17:03.025 "state": "online", 00:17:03.025 "raid_level": "raid1", 00:17:03.025 "superblock": true, 00:17:03.025 "num_base_bdevs": 2, 00:17:03.025 "num_base_bdevs_discovered": 2, 00:17:03.025 "num_base_bdevs_operational": 2, 00:17:03.025 "base_bdevs_list": [ 00:17:03.025 { 00:17:03.025 "name": "pt1", 00:17:03.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.025 "is_configured": true, 00:17:03.025 "data_offset": 256, 00:17:03.025 "data_size": 7936 00:17:03.025 }, 00:17:03.025 { 00:17:03.025 "name": "pt2", 00:17:03.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.025 "is_configured": true, 00:17:03.025 "data_offset": 256, 00:17:03.025 "data_size": 7936 00:17:03.025 } 00:17:03.025 ] 00:17:03.025 }' 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.025 19:45:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.595 [2024-12-12 19:45:46.214317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.595 "name": "raid_bdev1", 00:17:03.595 "aliases": [ 00:17:03.595 "899051e7-79a6-4a23-b67d-f3209e04a2d7" 00:17:03.595 ], 00:17:03.595 "product_name": "Raid Volume", 00:17:03.595 "block_size": 4096, 00:17:03.595 "num_blocks": 7936, 00:17:03.595 "uuid": "899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:03.595 "assigned_rate_limits": { 00:17:03.595 "rw_ios_per_sec": 0, 00:17:03.595 "rw_mbytes_per_sec": 0, 00:17:03.595 "r_mbytes_per_sec": 0, 00:17:03.595 "w_mbytes_per_sec": 0 00:17:03.595 }, 00:17:03.595 "claimed": false, 00:17:03.595 "zoned": false, 00:17:03.595 "supported_io_types": { 00:17:03.595 "read": true, 00:17:03.595 "write": true, 00:17:03.595 "unmap": false, 
00:17:03.595 "flush": false, 00:17:03.595 "reset": true, 00:17:03.595 "nvme_admin": false, 00:17:03.595 "nvme_io": false, 00:17:03.595 "nvme_io_md": false, 00:17:03.595 "write_zeroes": true, 00:17:03.595 "zcopy": false, 00:17:03.595 "get_zone_info": false, 00:17:03.595 "zone_management": false, 00:17:03.595 "zone_append": false, 00:17:03.595 "compare": false, 00:17:03.595 "compare_and_write": false, 00:17:03.595 "abort": false, 00:17:03.595 "seek_hole": false, 00:17:03.595 "seek_data": false, 00:17:03.595 "copy": false, 00:17:03.595 "nvme_iov_md": false 00:17:03.595 }, 00:17:03.595 "memory_domains": [ 00:17:03.595 { 00:17:03.595 "dma_device_id": "system", 00:17:03.595 "dma_device_type": 1 00:17:03.595 }, 00:17:03.595 { 00:17:03.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.595 "dma_device_type": 2 00:17:03.595 }, 00:17:03.595 { 00:17:03.595 "dma_device_id": "system", 00:17:03.595 "dma_device_type": 1 00:17:03.595 }, 00:17:03.595 { 00:17:03.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.595 "dma_device_type": 2 00:17:03.595 } 00:17:03.595 ], 00:17:03.595 "driver_specific": { 00:17:03.595 "raid": { 00:17:03.595 "uuid": "899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:03.595 "strip_size_kb": 0, 00:17:03.595 "state": "online", 00:17:03.595 "raid_level": "raid1", 00:17:03.595 "superblock": true, 00:17:03.595 "num_base_bdevs": 2, 00:17:03.595 "num_base_bdevs_discovered": 2, 00:17:03.595 "num_base_bdevs_operational": 2, 00:17:03.595 "base_bdevs_list": [ 00:17:03.595 { 00:17:03.595 "name": "pt1", 00:17:03.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.595 "is_configured": true, 00:17:03.595 "data_offset": 256, 00:17:03.595 "data_size": 7936 00:17:03.595 }, 00:17:03.595 { 00:17:03.595 "name": "pt2", 00:17:03.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.595 "is_configured": true, 00:17:03.595 "data_offset": 256, 00:17:03.595 "data_size": 7936 00:17:03.595 } 00:17:03.595 ] 00:17:03.595 } 00:17:03.595 } 00:17:03.595 }' 00:17:03.595 
19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:03.595 pt2' 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:03.595 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.855 [2024-12-12 19:45:46.445848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 899051e7-79a6-4a23-b67d-f3209e04a2d7 '!=' 899051e7-79a6-4a23-b67d-f3209e04a2d7 ']' 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.855 [2024-12-12 19:45:46.493624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.855 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.855 "name": "raid_bdev1", 00:17:03.855 "uuid": 
"899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:03.855 "strip_size_kb": 0, 00:17:03.855 "state": "online", 00:17:03.855 "raid_level": "raid1", 00:17:03.855 "superblock": true, 00:17:03.855 "num_base_bdevs": 2, 00:17:03.855 "num_base_bdevs_discovered": 1, 00:17:03.855 "num_base_bdevs_operational": 1, 00:17:03.855 "base_bdevs_list": [ 00:17:03.855 { 00:17:03.855 "name": null, 00:17:03.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.855 "is_configured": false, 00:17:03.855 "data_offset": 0, 00:17:03.855 "data_size": 7936 00:17:03.855 }, 00:17:03.855 { 00:17:03.855 "name": "pt2", 00:17:03.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.855 "is_configured": true, 00:17:03.855 "data_offset": 256, 00:17:03.855 "data_size": 7936 00:17:03.855 } 00:17:03.856 ] 00:17:03.856 }' 00:17:03.856 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.856 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.115 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.115 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.115 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.115 [2024-12-12 19:45:46.940802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.115 [2024-12-12 19:45:46.940864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.115 [2024-12-12 19:45:46.940923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.115 [2024-12-12 19:45:46.940963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.115 [2024-12-12 19:45:46.940973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:17:04.115 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.115 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.115 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.115 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.115 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:04.115 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.375 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:04.375 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:04.375 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:04.375 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.375 19:45:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:04.375 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.375 19:45:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 [2024-12-12 19:45:47.012667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.375 [2024-12-12 19:45:47.012714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.375 [2024-12-12 19:45:47.012729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:04.375 [2024-12-12 19:45:47.012739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.375 [2024-12-12 19:45:47.014887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.375 [2024-12-12 19:45:47.014967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.375 [2024-12-12 19:45:47.015053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:04.375 [2024-12-12 19:45:47.015099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.375 [2024-12-12 19:45:47.015214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:04.375 [2024-12-12 19:45:47.015224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:04.375 [2024-12-12 19:45:47.015436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:04.375 [2024-12-12 19:45:47.015589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:04.375 [2024-12-12 19:45:47.015599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:04.375 [2024-12-12 19:45:47.015732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.375 pt2 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.375 19:45:47 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.375 "name": "raid_bdev1", 00:17:04.375 "uuid": "899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:04.375 "strip_size_kb": 0, 00:17:04.375 "state": "online", 00:17:04.375 "raid_level": "raid1", 00:17:04.375 "superblock": true, 00:17:04.375 "num_base_bdevs": 2, 00:17:04.375 "num_base_bdevs_discovered": 1, 00:17:04.375 "num_base_bdevs_operational": 1, 00:17:04.375 "base_bdevs_list": [ 00:17:04.375 { 00:17:04.375 "name": null, 00:17:04.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.375 "is_configured": false, 00:17:04.375 "data_offset": 256, 00:17:04.375 "data_size": 7936 00:17:04.375 }, 00:17:04.375 { 00:17:04.375 "name": "pt2", 00:17:04.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.375 "is_configured": true, 00:17:04.375 "data_offset": 256, 00:17:04.375 "data_size": 7936 00:17:04.375 } 00:17:04.375 ] 00:17:04.375 }' 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.375 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.639 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.639 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.639 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.639 [2024-12-12 19:45:47.400024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.639 [2024-12-12 19:45:47.400091] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.639 [2024-12-12 19:45:47.400171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.639 [2024-12-12 19:45:47.400232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:04.639 [2024-12-12 19:45:47.400291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:04.639 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.639 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.640 [2024-12-12 19:45:47.459935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:04.640 [2024-12-12 19:45:47.460026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.640 [2024-12-12 19:45:47.460065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:04.640 [2024-12-12 19:45:47.460095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.640 [2024-12-12 19:45:47.462118] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.640 [2024-12-12 19:45:47.462188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:04.640 [2024-12-12 19:45:47.462301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:04.640 [2024-12-12 19:45:47.462374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:04.640 [2024-12-12 19:45:47.462577] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:04.640 [2024-12-12 19:45:47.462633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.640 [2024-12-12 19:45:47.462669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:04.640 [2024-12-12 19:45:47.462769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.640 [2024-12-12 19:45:47.462882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:04.640 [2024-12-12 19:45:47.462918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:04.640 [2024-12-12 19:45:47.463176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:04.640 [2024-12-12 19:45:47.463367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:04.640 [2024-12-12 19:45:47.463411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:04.640 [2024-12-12 19:45:47.463643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.640 pt1 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.640 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.911 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.911 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.911 "name": "raid_bdev1", 00:17:04.911 "uuid": "899051e7-79a6-4a23-b67d-f3209e04a2d7", 00:17:04.911 "strip_size_kb": 0, 00:17:04.911 "state": "online", 00:17:04.911 
"raid_level": "raid1", 00:17:04.911 "superblock": true, 00:17:04.911 "num_base_bdevs": 2, 00:17:04.911 "num_base_bdevs_discovered": 1, 00:17:04.911 "num_base_bdevs_operational": 1, 00:17:04.911 "base_bdevs_list": [ 00:17:04.911 { 00:17:04.911 "name": null, 00:17:04.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.911 "is_configured": false, 00:17:04.911 "data_offset": 256, 00:17:04.911 "data_size": 7936 00:17:04.911 }, 00:17:04.911 { 00:17:04.911 "name": "pt2", 00:17:04.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.911 "is_configured": true, 00:17:04.911 "data_offset": 256, 00:17:04.911 "data_size": 7936 00:17:04.911 } 00:17:04.911 ] 00:17:04.911 }' 00:17:04.911 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.911 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.190 19:45:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:17:05.190 [2024-12-12 19:45:47.983206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.190 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.190 19:45:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 899051e7-79a6-4a23-b67d-f3209e04a2d7 '!=' 899051e7-79a6-4a23-b67d-f3209e04a2d7 ']' 00:17:05.190 19:45:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 87821 00:17:05.190 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 87821 ']' 00:17:05.190 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 87821 00:17:05.190 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:05.474 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.474 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87821 00:17:05.474 killing process with pid 87821 00:17:05.474 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.474 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.474 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87821' 00:17:05.474 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 87821 00:17:05.474 [2024-12-12 19:45:48.055581] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.474 [2024-12-12 19:45:48.055648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.474 [2024-12-12 19:45:48.055687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.474 [2024-12-12 
19:45:48.055699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:05.474 19:45:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 87821 00:17:05.474 [2024-12-12 19:45:48.250490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.868 19:45:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:06.868 00:17:06.868 real 0m5.916s 00:17:06.868 user 0m8.936s 00:17:06.868 sys 0m1.082s 00:17:06.868 19:45:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.868 19:45:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.868 ************************************ 00:17:06.868 END TEST raid_superblock_test_4k 00:17:06.868 ************************************ 00:17:06.868 19:45:49 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:06.868 19:45:49 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:06.868 19:45:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:06.868 19:45:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.868 19:45:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.868 ************************************ 00:17:06.868 START TEST raid_rebuild_test_sb_4k 00:17:06.868 ************************************ 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:06.868 19:45:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=88144 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 88144 00:17:06.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 88144 ']' 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.868 19:45:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.868 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:06.868 Zero copy mechanism will not be used. 00:17:06.868 [2024-12-12 19:45:49.478736] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:06.868 [2024-12-12 19:45:49.478850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88144 ] 00:17:06.868 [2024-12-12 19:45:49.649403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.128 [2024-12-12 19:45:49.757608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.128 [2024-12-12 19:45:49.941424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.128 [2024-12-12 19:45:49.941472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.698 BaseBdev1_malloc 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.698 [2024-12-12 19:45:50.315134] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:07.698 [2024-12-12 19:45:50.315196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.698 [2024-12-12 19:45:50.315218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:07.698 [2024-12-12 19:45:50.315229] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.698 [2024-12-12 19:45:50.317158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.698 [2024-12-12 19:45:50.317200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.698 BaseBdev1 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.698 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.699 BaseBdev2_malloc 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.699 [2024-12-12 19:45:50.368083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:07.699 [2024-12-12 19:45:50.368182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:07.699 [2024-12-12 19:45:50.368218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:07.699 [2024-12-12 19:45:50.368251] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.699 [2024-12-12 19:45:50.370243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.699 [2024-12-12 19:45:50.370336] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:07.699 BaseBdev2 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.699 spare_malloc 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.699 spare_delay 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.699 
[2024-12-12 19:45:50.468233] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.699 [2024-12-12 19:45:50.468337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.699 [2024-12-12 19:45:50.468374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:07.699 [2024-12-12 19:45:50.468403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.699 [2024-12-12 19:45:50.470380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.699 [2024-12-12 19:45:50.470457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.699 spare 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.699 [2024-12-12 19:45:50.480285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.699 [2024-12-12 19:45:50.482010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.699 [2024-12-12 19:45:50.482206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.699 [2024-12-12 19:45:50.482243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:07.699 [2024-12-12 19:45:50.482549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:07.699 [2024-12-12 19:45:50.482765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.699 [2024-12-12 
19:45:50.482806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.699 [2024-12-12 19:45:50.483019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.699 "name": "raid_bdev1", 00:17:07.699 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:07.699 "strip_size_kb": 0, 00:17:07.699 "state": "online", 00:17:07.699 "raid_level": "raid1", 00:17:07.699 "superblock": true, 00:17:07.699 "num_base_bdevs": 2, 00:17:07.699 "num_base_bdevs_discovered": 2, 00:17:07.699 "num_base_bdevs_operational": 2, 00:17:07.699 "base_bdevs_list": [ 00:17:07.699 { 00:17:07.699 "name": "BaseBdev1", 00:17:07.699 "uuid": "c250c40f-9b40-5a3b-853d-2fa6614962c6", 00:17:07.699 "is_configured": true, 00:17:07.699 "data_offset": 256, 00:17:07.699 "data_size": 7936 00:17:07.699 }, 00:17:07.699 { 00:17:07.699 "name": "BaseBdev2", 00:17:07.699 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:07.699 "is_configured": true, 00:17:07.699 "data_offset": 256, 00:17:07.699 "data_size": 7936 00:17:07.699 } 00:17:07.699 ] 00:17:07.699 }' 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.699 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.268 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:08.268 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.268 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.268 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.268 [2024-12-12 19:45:50.911812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.268 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.268 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:17:08.268 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:08.269 19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:08.269 
19:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:08.528 [2024-12-12 19:45:51.167123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:08.528 /dev/nbd0 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:08.528 1+0 records in 00:17:08.528 1+0 records out 00:17:08.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221677 s, 18.5 MB/s 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:08.528 19:45:51 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:08.528 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:09.096 7936+0 records in 00:17:09.096 7936+0 records out 00:17:09.096 32505856 bytes (33 MB, 31 MiB) copied, 0.607569 s, 53.5 MB/s 00:17:09.096 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:09.096 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.096 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:09.097 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:09.097 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:09.097 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.097 19:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:09.356 
19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:09.356 [2024-12-12 19:45:52.055655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.356 [2024-12-12 19:45:52.071722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.356 19:45:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.356 "name": "raid_bdev1", 00:17:09.356 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:09.356 "strip_size_kb": 0, 00:17:09.356 "state": "online", 00:17:09.356 "raid_level": "raid1", 00:17:09.356 "superblock": true, 00:17:09.356 "num_base_bdevs": 2, 00:17:09.356 "num_base_bdevs_discovered": 1, 00:17:09.356 "num_base_bdevs_operational": 1, 00:17:09.356 "base_bdevs_list": [ 00:17:09.356 { 00:17:09.356 "name": null, 00:17:09.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.356 "is_configured": false, 00:17:09.356 "data_offset": 0, 00:17:09.356 "data_size": 7936 00:17:09.356 }, 00:17:09.356 { 00:17:09.356 "name": "BaseBdev2", 00:17:09.356 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:09.356 "is_configured": true, 00:17:09.356 "data_offset": 256, 00:17:09.356 
"data_size": 7936 00:17:09.356 } 00:17:09.356 ] 00:17:09.356 }' 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.356 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.926 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.926 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.926 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.926 [2024-12-12 19:45:52.486997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.926 [2024-12-12 19:45:52.503862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:09.926 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.926 19:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:09.926 [2024-12-12 19:45:52.505600] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.865 "name": "raid_bdev1", 00:17:10.865 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:10.865 "strip_size_kb": 0, 00:17:10.865 "state": "online", 00:17:10.865 "raid_level": "raid1", 00:17:10.865 "superblock": true, 00:17:10.865 "num_base_bdevs": 2, 00:17:10.865 "num_base_bdevs_discovered": 2, 00:17:10.865 "num_base_bdevs_operational": 2, 00:17:10.865 "process": { 00:17:10.865 "type": "rebuild", 00:17:10.865 "target": "spare", 00:17:10.865 "progress": { 00:17:10.865 "blocks": 2560, 00:17:10.865 "percent": 32 00:17:10.865 } 00:17:10.865 }, 00:17:10.865 "base_bdevs_list": [ 00:17:10.865 { 00:17:10.865 "name": "spare", 00:17:10.865 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:10.865 "is_configured": true, 00:17:10.865 "data_offset": 256, 00:17:10.865 "data_size": 7936 00:17:10.865 }, 00:17:10.865 { 00:17:10.865 "name": "BaseBdev2", 00:17:10.865 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:10.865 "is_configured": true, 00:17:10.865 "data_offset": 256, 00:17:10.865 "data_size": 7936 00:17:10.865 } 00:17:10.865 ] 00:17:10.865 }' 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.865 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.865 [2024-12-12 19:45:53.644918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:11.125 [2024-12-12 19:45:53.710423] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:11.125 [2024-12-12 19:45:53.710532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.125 [2024-12-12 19:45:53.710580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:11.125 [2024-12-12 19:45:53.710604] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.125 "name": "raid_bdev1", 00:17:11.125 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:11.125 "strip_size_kb": 0, 00:17:11.125 "state": "online", 00:17:11.125 "raid_level": "raid1", 00:17:11.125 "superblock": true, 00:17:11.125 "num_base_bdevs": 2, 00:17:11.125 "num_base_bdevs_discovered": 1, 00:17:11.125 "num_base_bdevs_operational": 1, 00:17:11.125 "base_bdevs_list": [ 00:17:11.125 { 00:17:11.125 "name": null, 00:17:11.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.125 "is_configured": false, 00:17:11.125 "data_offset": 0, 00:17:11.125 "data_size": 7936 00:17:11.125 }, 00:17:11.125 { 00:17:11.125 "name": "BaseBdev2", 00:17:11.125 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:11.125 "is_configured": true, 00:17:11.125 "data_offset": 256, 00:17:11.125 "data_size": 7936 00:17:11.125 } 00:17:11.125 ] 00:17:11.125 }' 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.125 19:45:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.384 19:45:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.384 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.385 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.385 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.385 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.385 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.385 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.385 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.385 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.385 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.644 "name": "raid_bdev1", 00:17:11.644 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:11.644 "strip_size_kb": 0, 00:17:11.644 "state": "online", 00:17:11.644 "raid_level": "raid1", 00:17:11.644 "superblock": true, 00:17:11.644 "num_base_bdevs": 2, 00:17:11.644 "num_base_bdevs_discovered": 1, 00:17:11.644 "num_base_bdevs_operational": 1, 00:17:11.644 "base_bdevs_list": [ 00:17:11.644 { 00:17:11.644 "name": null, 00:17:11.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.644 "is_configured": false, 00:17:11.644 "data_offset": 0, 00:17:11.644 "data_size": 7936 00:17:11.644 }, 00:17:11.644 { 00:17:11.644 "name": "BaseBdev2", 00:17:11.644 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:11.644 "is_configured": true, 00:17:11.644 "data_offset": 
256, 00:17:11.644 "data_size": 7936 00:17:11.644 } 00:17:11.644 ] 00:17:11.644 }' 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.644 [2024-12-12 19:45:54.335306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.644 [2024-12-12 19:45:54.350455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.644 19:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:11.644 [2024-12-12 19:45:54.352268] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.583 "name": "raid_bdev1", 00:17:12.583 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:12.583 "strip_size_kb": 0, 00:17:12.583 "state": "online", 00:17:12.583 "raid_level": "raid1", 00:17:12.583 "superblock": true, 00:17:12.583 "num_base_bdevs": 2, 00:17:12.583 "num_base_bdevs_discovered": 2, 00:17:12.583 "num_base_bdevs_operational": 2, 00:17:12.583 "process": { 00:17:12.583 "type": "rebuild", 00:17:12.583 "target": "spare", 00:17:12.583 "progress": { 00:17:12.583 "blocks": 2560, 00:17:12.583 "percent": 32 00:17:12.583 } 00:17:12.583 }, 00:17:12.583 "base_bdevs_list": [ 00:17:12.583 { 00:17:12.583 "name": "spare", 00:17:12.583 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:12.583 "is_configured": true, 00:17:12.583 "data_offset": 256, 00:17:12.583 "data_size": 7936 00:17:12.583 }, 00:17:12.583 { 00:17:12.583 "name": "BaseBdev2", 00:17:12.583 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:12.583 "is_configured": true, 00:17:12.583 "data_offset": 256, 00:17:12.583 "data_size": 7936 00:17:12.583 } 00:17:12.583 ] 00:17:12.583 }' 00:17:12.583 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.842 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:12.842 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.842 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.842 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:12.842 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:12.842 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:12.842 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:12.842 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=671 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- 
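The `[: =: unary operator expected` failure captured above is the classic single-bracket pitfall: the variable on the left of `'[' ... = false ']'` expanded to nothing, leaving the `[` builtin with only two arguments. A minimal reproduction of the failure mode and the usual fixes (quoting the expansion, or using the `[[ ]]` builtin, which does not word-split), with a hypothetical empty `flag` variable:

```shell
flag=""   # hypothetical empty variable, mirroring the empty operand in the log

# Unquoted empty expansion degenerates to: [ = false ]
# which prints "[: =: unary operator expected" and returns status 2.
# (Left commented out so the sketch itself runs cleanly.)
# [ $flag = false ] && echo never

# Fix 1: quote the expansion so [ sees an empty-string operand.
[ "$flag" = false ] || echo "quoted test: flag is not 'false'"

# Fix 2: inside [[ ]] an empty variable is safe even unquoted.
[[ $flag = false ]] || echo "[[ ]] test: flag is not 'false'"
```

Note the error is non-fatal here: `[` returns a failure status, the `'[' = false ']'` branch is simply not taken, and the trace continues with the next command.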
common/autotest_common.sh@10 -- # set +x 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.843 "name": "raid_bdev1", 00:17:12.843 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:12.843 "strip_size_kb": 0, 00:17:12.843 "state": "online", 00:17:12.843 "raid_level": "raid1", 00:17:12.843 "superblock": true, 00:17:12.843 "num_base_bdevs": 2, 00:17:12.843 "num_base_bdevs_discovered": 2, 00:17:12.843 "num_base_bdevs_operational": 2, 00:17:12.843 "process": { 00:17:12.843 "type": "rebuild", 00:17:12.843 "target": "spare", 00:17:12.843 "progress": { 00:17:12.843 "blocks": 2816, 00:17:12.843 "percent": 35 00:17:12.843 } 00:17:12.843 }, 00:17:12.843 "base_bdevs_list": [ 00:17:12.843 { 00:17:12.843 "name": "spare", 00:17:12.843 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:12.843 "is_configured": true, 00:17:12.843 "data_offset": 256, 00:17:12.843 "data_size": 7936 00:17:12.843 }, 00:17:12.843 { 00:17:12.843 "name": "BaseBdev2", 00:17:12.843 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:12.843 "is_configured": true, 00:17:12.843 "data_offset": 256, 00:17:12.843 "data_size": 7936 00:17:12.843 } 00:17:12.843 ] 00:17:12.843 }' 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.843 19:45:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.224 "name": "raid_bdev1", 00:17:14.224 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:14.224 "strip_size_kb": 0, 00:17:14.224 "state": "online", 00:17:14.224 "raid_level": "raid1", 00:17:14.224 "superblock": true, 00:17:14.224 "num_base_bdevs": 2, 00:17:14.224 "num_base_bdevs_discovered": 2, 00:17:14.224 "num_base_bdevs_operational": 2, 00:17:14.224 "process": { 00:17:14.224 "type": "rebuild", 00:17:14.224 "target": "spare", 00:17:14.224 "progress": { 00:17:14.224 "blocks": 5888, 00:17:14.224 "percent": 74 00:17:14.224 } 00:17:14.224 }, 00:17:14.224 "base_bdevs_list": [ 00:17:14.224 { 
00:17:14.224 "name": "spare", 00:17:14.224 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:14.224 "is_configured": true, 00:17:14.224 "data_offset": 256, 00:17:14.224 "data_size": 7936 00:17:14.224 }, 00:17:14.224 { 00:17:14.224 "name": "BaseBdev2", 00:17:14.224 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:14.224 "is_configured": true, 00:17:14.224 "data_offset": 256, 00:17:14.224 "data_size": 7936 00:17:14.224 } 00:17:14.224 ] 00:17:14.224 }' 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.224 19:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.794 [2024-12-12 19:45:57.464365] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:14.794 [2024-12-12 19:45:57.464431] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:14.794 [2024-12-12 19:45:57.464539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.054 "name": "raid_bdev1", 00:17:15.054 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:15.054 "strip_size_kb": 0, 00:17:15.054 "state": "online", 00:17:15.054 "raid_level": "raid1", 00:17:15.054 "superblock": true, 00:17:15.054 "num_base_bdevs": 2, 00:17:15.054 "num_base_bdevs_discovered": 2, 00:17:15.054 "num_base_bdevs_operational": 2, 00:17:15.054 "base_bdevs_list": [ 00:17:15.054 { 00:17:15.054 "name": "spare", 00:17:15.054 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:15.054 "is_configured": true, 00:17:15.054 "data_offset": 256, 00:17:15.054 "data_size": 7936 00:17:15.054 }, 00:17:15.054 { 00:17:15.054 "name": "BaseBdev2", 00:17:15.054 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:15.054 "is_configured": true, 00:17:15.054 "data_offset": 256, 00:17:15.054 "data_size": 7936 00:17:15.054 } 00:17:15.054 ] 00:17:15.054 }' 00:17:15.054 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.314 19:45:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.314 "name": "raid_bdev1", 00:17:15.314 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:15.314 "strip_size_kb": 0, 00:17:15.314 "state": "online", 00:17:15.314 "raid_level": "raid1", 00:17:15.314 "superblock": true, 00:17:15.314 "num_base_bdevs": 2, 00:17:15.314 "num_base_bdevs_discovered": 2, 00:17:15.314 "num_base_bdevs_operational": 2, 00:17:15.314 "base_bdevs_list": [ 00:17:15.314 { 00:17:15.314 "name": "spare", 00:17:15.314 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:15.314 "is_configured": true, 00:17:15.314 
"data_offset": 256, 00:17:15.314 "data_size": 7936 00:17:15.314 }, 00:17:15.314 { 00:17:15.314 "name": "BaseBdev2", 00:17:15.314 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:15.314 "is_configured": true, 00:17:15.314 "data_offset": 256, 00:17:15.314 "data_size": 7936 00:17:15.314 } 00:17:15.314 ] 00:17:15.314 }' 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.314 "name": "raid_bdev1", 00:17:15.314 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:15.314 "strip_size_kb": 0, 00:17:15.314 "state": "online", 00:17:15.314 "raid_level": "raid1", 00:17:15.314 "superblock": true, 00:17:15.314 "num_base_bdevs": 2, 00:17:15.314 "num_base_bdevs_discovered": 2, 00:17:15.314 "num_base_bdevs_operational": 2, 00:17:15.314 "base_bdevs_list": [ 00:17:15.314 { 00:17:15.314 "name": "spare", 00:17:15.314 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:15.314 "is_configured": true, 00:17:15.314 "data_offset": 256, 00:17:15.314 "data_size": 7936 00:17:15.314 }, 00:17:15.314 { 00:17:15.314 "name": "BaseBdev2", 00:17:15.314 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:15.314 "is_configured": true, 00:17:15.314 "data_offset": 256, 00:17:15.314 "data_size": 7936 00:17:15.314 } 00:17:15.314 ] 00:17:15.314 }' 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.314 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.884 
[2024-12-12 19:45:58.523403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.884 [2024-12-12 19:45:58.523470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.884 [2024-12-12 19:45:58.523576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.884 [2024-12-12 19:45:58.523692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.884 [2024-12-12 19:45:58.523749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:15.884 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:16.144 /dev/nbd0 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.144 1+0 records in 00:17:16.144 1+0 records out 00:17:16.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364254 s, 11.2 MB/s 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:16.144 19:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:16.404 /dev/nbd1 00:17:16.404 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:16.404 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.405 1+0 records in 00:17:16.405 1+0 records out 00:17:16.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397112 s, 10.3 MB/s 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
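The data-integrity step above runs `cmp -i 1048576 /dev/nbd0 /dev/nbd1`: a byte-for-byte comparison of the two exported NBD devices that skips the first 1 MiB of each (presumably the region where the superblocks legitimately differ). A minimal sketch of the same `-i` (`--ignore-initial`) pattern, with hypothetical temp files standing in for the devices:

```shell
a=$(mktemp); b=$(mktemp)

# Identical payloads behind different 7-byte headers.
printf 'headerApayload' > "$a"
printf 'headerBpayload' > "$b"

# A full compare fails on the headers...
cmp -s "$a" "$b" || echo "whole files differ"

# ...but -i SKIP ignores the first SKIP bytes of BOTH files, so only
# the payload region is compared (here SKIP=7; the log uses 1048576).
cmp -s -i 7 "$a" "$b" && echo "payloads identical"

rm -f "$a" "$b"
```

With a single number, `-i` applies the same skip to both operands; `cmp` exits 0 only if the remaining regions match, which is what lets the test conclude the rebuilt device carries the same data as the original.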
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.405 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.665 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:16.924 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:16.924 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:16.924 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:16.924 19:45:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.925 [2024-12-12 19:45:59.662102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.925 [2024-12-12 19:45:59.662195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.925 [2024-12-12 19:45:59.662221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:16.925 [2024-12-12 19:45:59.662230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.925 [2024-12-12 19:45:59.664418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.925 
[2024-12-12 19:45:59.664458] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.925 [2024-12-12 19:45:59.664553] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:16.925 [2024-12-12 19:45:59.664601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.925 [2024-12-12 19:45:59.664766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.925 spare 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.925 [2024-12-12 19:45:59.764672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:16.925 [2024-12-12 19:45:59.764737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:16.925 [2024-12-12 19:45:59.765046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:16.925 [2024-12-12 19:45:59.765270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:16.925 [2024-12-12 19:45:59.765327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:16.925 [2024-12-12 19:45:59.765576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.925 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.184 19:45:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.184 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.184 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.184 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.184 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.184 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.185 "name": "raid_bdev1", 00:17:17.185 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:17.185 "strip_size_kb": 0, 00:17:17.185 "state": "online", 00:17:17.185 "raid_level": "raid1", 00:17:17.185 "superblock": true, 00:17:17.185 "num_base_bdevs": 2, 00:17:17.185 "num_base_bdevs_discovered": 2, 00:17:17.185 "num_base_bdevs_operational": 2, 
00:17:17.185 "base_bdevs_list": [ 00:17:17.185 { 00:17:17.185 "name": "spare", 00:17:17.185 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:17.185 "is_configured": true, 00:17:17.185 "data_offset": 256, 00:17:17.185 "data_size": 7936 00:17:17.185 }, 00:17:17.185 { 00:17:17.185 "name": "BaseBdev2", 00:17:17.185 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:17.185 "is_configured": true, 00:17:17.185 "data_offset": 256, 00:17:17.185 "data_size": 7936 00:17:17.185 } 00:17:17.185 ] 00:17:17.185 }' 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.185 19:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.444 "name": "raid_bdev1", 00:17:17.444 
"uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:17.444 "strip_size_kb": 0, 00:17:17.444 "state": "online", 00:17:17.444 "raid_level": "raid1", 00:17:17.444 "superblock": true, 00:17:17.444 "num_base_bdevs": 2, 00:17:17.444 "num_base_bdevs_discovered": 2, 00:17:17.444 "num_base_bdevs_operational": 2, 00:17:17.444 "base_bdevs_list": [ 00:17:17.444 { 00:17:17.444 "name": "spare", 00:17:17.444 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:17.444 "is_configured": true, 00:17:17.444 "data_offset": 256, 00:17:17.444 "data_size": 7936 00:17:17.444 }, 00:17:17.444 { 00:17:17.444 "name": "BaseBdev2", 00:17:17.444 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:17.444 "is_configured": true, 00:17:17.444 "data_offset": 256, 00:17:17.444 "data_size": 7936 00:17:17.444 } 00:17:17.444 ] 00:17:17.444 }' 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.444 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.704 [2024-12-12 19:46:00.360966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.704 
19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.704 "name": "raid_bdev1", 00:17:17.704 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:17.704 "strip_size_kb": 0, 00:17:17.704 "state": "online", 00:17:17.704 "raid_level": "raid1", 00:17:17.704 "superblock": true, 00:17:17.704 "num_base_bdevs": 2, 00:17:17.704 "num_base_bdevs_discovered": 1, 00:17:17.704 "num_base_bdevs_operational": 1, 00:17:17.704 "base_bdevs_list": [ 00:17:17.704 { 00:17:17.704 "name": null, 00:17:17.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.704 "is_configured": false, 00:17:17.704 "data_offset": 0, 00:17:17.704 "data_size": 7936 00:17:17.704 }, 00:17:17.704 { 00:17:17.704 "name": "BaseBdev2", 00:17:17.704 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:17.704 "is_configured": true, 00:17:17.704 "data_offset": 256, 00:17:17.704 "data_size": 7936 00:17:17.704 } 00:17:17.704 ] 00:17:17.704 }' 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.704 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.965 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.965 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.965 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.965 [2024-12-12 19:46:00.752298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.965 [2024-12-12 19:46:00.752514] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:17:17.965 [2024-12-12 19:46:00.752588] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:17.965 [2024-12-12 19:46:00.752655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.965 [2024-12-12 19:46:00.768072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:17.965 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.965 19:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:17.965 [2024-12-12 19:46:00.769890] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.368 
"name": "raid_bdev1", 00:17:19.368 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:19.368 "strip_size_kb": 0, 00:17:19.368 "state": "online", 00:17:19.368 "raid_level": "raid1", 00:17:19.368 "superblock": true, 00:17:19.368 "num_base_bdevs": 2, 00:17:19.368 "num_base_bdevs_discovered": 2, 00:17:19.368 "num_base_bdevs_operational": 2, 00:17:19.368 "process": { 00:17:19.368 "type": "rebuild", 00:17:19.368 "target": "spare", 00:17:19.368 "progress": { 00:17:19.368 "blocks": 2560, 00:17:19.368 "percent": 32 00:17:19.368 } 00:17:19.368 }, 00:17:19.368 "base_bdevs_list": [ 00:17:19.368 { 00:17:19.368 "name": "spare", 00:17:19.368 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:19.368 "is_configured": true, 00:17:19.368 "data_offset": 256, 00:17:19.368 "data_size": 7936 00:17:19.368 }, 00:17:19.368 { 00:17:19.368 "name": "BaseBdev2", 00:17:19.368 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:19.368 "is_configured": true, 00:17:19.368 "data_offset": 256, 00:17:19.368 "data_size": 7936 00:17:19.368 } 00:17:19.368 ] 00:17:19.368 }' 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.368 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.369 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.369 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:19.369 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.369 19:46:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.369 [2024-12-12 19:46:01.934450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.369 [2024-12-12 
19:46:01.974755] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:19.369 [2024-12-12 19:46:01.974828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.369 [2024-12-12 19:46:01.974842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.369 [2024-12-12 19:46:01.974851] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.369 "name": "raid_bdev1", 00:17:19.369 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:19.369 "strip_size_kb": 0, 00:17:19.369 "state": "online", 00:17:19.369 "raid_level": "raid1", 00:17:19.369 "superblock": true, 00:17:19.369 "num_base_bdevs": 2, 00:17:19.369 "num_base_bdevs_discovered": 1, 00:17:19.369 "num_base_bdevs_operational": 1, 00:17:19.369 "base_bdevs_list": [ 00:17:19.369 { 00:17:19.369 "name": null, 00:17:19.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.369 "is_configured": false, 00:17:19.369 "data_offset": 0, 00:17:19.369 "data_size": 7936 00:17:19.369 }, 00:17:19.369 { 00:17:19.369 "name": "BaseBdev2", 00:17:19.369 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:19.369 "is_configured": true, 00:17:19.369 "data_offset": 256, 00:17:19.369 "data_size": 7936 00:17:19.369 } 00:17:19.369 ] 00:17:19.369 }' 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.369 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.628 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:19.628 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.628 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.628 [2024-12-12 19:46:02.466639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:19.628 [2024-12-12 19:46:02.466738] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.628 [2024-12-12 19:46:02.466776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:19.628 [2024-12-12 19:46:02.466806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.628 [2024-12-12 19:46:02.467315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.628 [2024-12-12 19:46:02.467378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:19.628 [2024-12-12 19:46:02.467512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:19.628 [2024-12-12 19:46:02.467571] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.628 [2024-12-12 19:46:02.467643] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:19.628 [2024-12-12 19:46:02.467722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.888 [2024-12-12 19:46:02.483449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:19.888 spare 00:17:19.888 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.888 [2024-12-12 19:46:02.485285] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.888 19:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.825 "name": "raid_bdev1", 00:17:20.825 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:20.825 "strip_size_kb": 0, 00:17:20.825 
"state": "online", 00:17:20.825 "raid_level": "raid1", 00:17:20.825 "superblock": true, 00:17:20.825 "num_base_bdevs": 2, 00:17:20.825 "num_base_bdevs_discovered": 2, 00:17:20.825 "num_base_bdevs_operational": 2, 00:17:20.825 "process": { 00:17:20.825 "type": "rebuild", 00:17:20.825 "target": "spare", 00:17:20.825 "progress": { 00:17:20.825 "blocks": 2560, 00:17:20.825 "percent": 32 00:17:20.825 } 00:17:20.825 }, 00:17:20.825 "base_bdevs_list": [ 00:17:20.825 { 00:17:20.825 "name": "spare", 00:17:20.825 "uuid": "d3649c3e-c87d-5dee-9802-dd8e4d5a9d6c", 00:17:20.825 "is_configured": true, 00:17:20.825 "data_offset": 256, 00:17:20.825 "data_size": 7936 00:17:20.825 }, 00:17:20.825 { 00:17:20.825 "name": "BaseBdev2", 00:17:20.825 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:20.825 "is_configured": true, 00:17:20.825 "data_offset": 256, 00:17:20.825 "data_size": 7936 00:17:20.825 } 00:17:20.825 ] 00:17:20.825 }' 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.825 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.825 [2024-12-12 19:46:03.621753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.084 [2024-12-12 19:46:03.690093] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:21.084 [2024-12-12 19:46:03.690145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.084 [2024-12-12 19:46:03.690163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.084 [2024-12-12 19:46:03.690170] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.084 19:46:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.084 "name": "raid_bdev1", 00:17:21.084 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:21.084 "strip_size_kb": 0, 00:17:21.084 "state": "online", 00:17:21.084 "raid_level": "raid1", 00:17:21.084 "superblock": true, 00:17:21.084 "num_base_bdevs": 2, 00:17:21.084 "num_base_bdevs_discovered": 1, 00:17:21.084 "num_base_bdevs_operational": 1, 00:17:21.084 "base_bdevs_list": [ 00:17:21.084 { 00:17:21.084 "name": null, 00:17:21.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.084 "is_configured": false, 00:17:21.084 "data_offset": 0, 00:17:21.084 "data_size": 7936 00:17:21.084 }, 00:17:21.084 { 00:17:21.084 "name": "BaseBdev2", 00:17:21.084 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:21.084 "is_configured": true, 00:17:21.084 "data_offset": 256, 00:17:21.084 "data_size": 7936 00:17:21.084 } 00:17:21.084 ] 00:17:21.084 }' 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.084 19:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.653 "name": "raid_bdev1", 00:17:21.653 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:21.653 "strip_size_kb": 0, 00:17:21.653 "state": "online", 00:17:21.653 "raid_level": "raid1", 00:17:21.653 "superblock": true, 00:17:21.653 "num_base_bdevs": 2, 00:17:21.653 "num_base_bdevs_discovered": 1, 00:17:21.653 "num_base_bdevs_operational": 1, 00:17:21.653 "base_bdevs_list": [ 00:17:21.653 { 00:17:21.653 "name": null, 00:17:21.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.653 "is_configured": false, 00:17:21.653 "data_offset": 0, 00:17:21.653 "data_size": 7936 00:17:21.653 }, 00:17:21.653 { 00:17:21.653 "name": "BaseBdev2", 00:17:21.653 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:21.653 "is_configured": true, 00:17:21.653 "data_offset": 256, 00:17:21.653 "data_size": 7936 00:17:21.653 } 00:17:21.653 ] 00:17:21.653 }' 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.653 [2024-12-12 19:46:04.343398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:21.653 [2024-12-12 19:46:04.343494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.653 [2024-12-12 19:46:04.343531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:21.653 [2024-12-12 19:46:04.343549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.653 [2024-12-12 19:46:04.343986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.653 [2024-12-12 19:46:04.344004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:21.653 [2024-12-12 19:46:04.344077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:21.653 [2024-12-12 19:46:04.344088] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:21.653 [2024-12-12 19:46:04.344099] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:21.653 [2024-12-12 19:46:04.344107] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:21.653 BaseBdev1 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.653 19:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.594 "name": "raid_bdev1", 00:17:22.594 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:22.594 "strip_size_kb": 0, 00:17:22.594 "state": "online", 00:17:22.594 "raid_level": "raid1", 00:17:22.594 "superblock": true, 00:17:22.594 "num_base_bdevs": 2, 00:17:22.594 "num_base_bdevs_discovered": 1, 00:17:22.594 "num_base_bdevs_operational": 1, 00:17:22.594 "base_bdevs_list": [ 00:17:22.594 { 00:17:22.594 "name": null, 00:17:22.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.594 "is_configured": false, 00:17:22.594 "data_offset": 0, 00:17:22.594 "data_size": 7936 00:17:22.594 }, 00:17:22.594 { 00:17:22.594 "name": "BaseBdev2", 00:17:22.594 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:22.594 "is_configured": true, 00:17:22.594 "data_offset": 256, 00:17:22.594 "data_size": 7936 00:17:22.594 } 00:17:22.594 ] 00:17:22.594 }' 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.594 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.163 "name": "raid_bdev1", 00:17:23.163 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:23.163 "strip_size_kb": 0, 00:17:23.163 "state": "online", 00:17:23.163 "raid_level": "raid1", 00:17:23.163 "superblock": true, 00:17:23.163 "num_base_bdevs": 2, 00:17:23.163 "num_base_bdevs_discovered": 1, 00:17:23.163 "num_base_bdevs_operational": 1, 00:17:23.163 "base_bdevs_list": [ 00:17:23.163 { 00:17:23.163 "name": null, 00:17:23.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.163 "is_configured": false, 00:17:23.163 "data_offset": 0, 00:17:23.163 "data_size": 7936 00:17:23.163 }, 00:17:23.163 { 00:17:23.163 "name": "BaseBdev2", 00:17:23.163 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:23.163 "is_configured": true, 00:17:23.163 "data_offset": 256, 00:17:23.163 "data_size": 7936 00:17:23.163 } 00:17:23.163 ] 00:17:23.163 }' 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.163 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.164 [2024-12-12 19:46:05.960976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.164 [2024-12-12 19:46:05.961181] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.164 [2024-12-12 19:46:05.961240] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:23.164 request: 00:17:23.164 { 00:17:23.164 "base_bdev": "BaseBdev1", 00:17:23.164 "raid_bdev": "raid_bdev1", 00:17:23.164 "method": "bdev_raid_add_base_bdev", 00:17:23.164 "req_id": 1 00:17:23.164 } 00:17:23.164 Got JSON-RPC error response 00:17:23.164 response: 00:17:23.164 { 00:17:23.164 "code": -22, 00:17:23.164 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:23.164 } 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.164 19:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:24.544 19:46:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.544 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.544 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.544 "name": "raid_bdev1", 00:17:24.544 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:24.544 "strip_size_kb": 0, 00:17:24.544 "state": "online", 00:17:24.544 "raid_level": "raid1", 00:17:24.544 "superblock": true, 00:17:24.544 "num_base_bdevs": 2, 00:17:24.544 "num_base_bdevs_discovered": 1, 00:17:24.544 "num_base_bdevs_operational": 1, 00:17:24.544 "base_bdevs_list": [ 00:17:24.544 { 00:17:24.544 "name": null, 00:17:24.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.544 "is_configured": false, 00:17:24.544 "data_offset": 0, 00:17:24.544 "data_size": 7936 00:17:24.544 }, 00:17:24.544 { 00:17:24.544 "name": "BaseBdev2", 00:17:24.544 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:24.544 "is_configured": true, 00:17:24.544 "data_offset": 256, 00:17:24.544 "data_size": 7936 00:17:24.544 } 00:17:24.544 ] 00:17:24.544 }' 00:17:24.544 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.544 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.804 19:46:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.804 "name": "raid_bdev1", 00:17:24.804 "uuid": "614b8559-3fec-433e-aba2-39af35e79070", 00:17:24.804 "strip_size_kb": 0, 00:17:24.804 "state": "online", 00:17:24.804 "raid_level": "raid1", 00:17:24.804 "superblock": true, 00:17:24.804 "num_base_bdevs": 2, 00:17:24.804 "num_base_bdevs_discovered": 1, 00:17:24.804 "num_base_bdevs_operational": 1, 00:17:24.804 "base_bdevs_list": [ 00:17:24.804 { 00:17:24.804 "name": null, 00:17:24.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.804 "is_configured": false, 00:17:24.804 "data_offset": 0, 00:17:24.804 "data_size": 7936 00:17:24.804 }, 00:17:24.804 { 00:17:24.804 "name": "BaseBdev2", 00:17:24.804 "uuid": "1e9f1234-8b32-5e9c-be84-c7c1cb78a40c", 00:17:24.804 "is_configured": true, 00:17:24.804 "data_offset": 256, 00:17:24.804 "data_size": 7936 00:17:24.804 } 00:17:24.804 ] 00:17:24.804 }' 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.804 19:46:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 88144 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 88144 ']' 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 88144 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88144 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.804 killing process with pid 88144 00:17:24.804 Received shutdown signal, test time was about 60.000000 seconds 00:17:24.804 00:17:24.804 Latency(us) 00:17:24.804 [2024-12-12T19:46:07.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.804 [2024-12-12T19:46:07.649Z] =================================================================================================================== 00:17:24.804 [2024-12-12T19:46:07.649Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88144' 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 88144 00:17:24.804 [2024-12-12 19:46:07.633320] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.804 [2024-12-12 19:46:07.633438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.804 19:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 88144 00:17:24.804 [2024-12-12 
19:46:07.633490] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.804 [2024-12-12 19:46:07.633501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:25.377 [2024-12-12 19:46:07.925850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:26.336 ************************************ 00:17:26.336 END TEST raid_rebuild_test_sb_4k 00:17:26.336 ************************************ 00:17:26.336 19:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:26.336 00:17:26.336 real 0m19.601s 00:17:26.336 user 0m25.520s 00:17:26.336 sys 0m2.571s 00:17:26.336 19:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.336 19:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.336 19:46:09 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:26.336 19:46:09 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:26.336 19:46:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:26.336 19:46:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.336 19:46:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.336 ************************************ 00:17:26.337 START TEST raid_state_function_test_sb_md_separate 00:17:26.337 ************************************ 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:26.337 
19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:26.337 19:46:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=88835 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:26.337 Process raid pid: 88835 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88835' 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 88835 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88835 ']' 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.337 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.337 [2024-12-12 19:46:09.145599] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:26.337 [2024-12-12 19:46:09.145776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.596 [2024-12-12 19:46:09.320643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.596 [2024-12-12 19:46:09.434864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.855 [2024-12-12 19:46:09.631570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.855 [2024-12-12 19:46:09.631610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.115 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.115 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:27.115 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:27.115 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.115 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.115 [2024-12-12 19:46:09.953948] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.115 [2024-12-12 19:46:09.953999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:27.115 [2024-12-12 19:46:09.954010] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.115 [2024-12-12 19:46:09.954019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.375 19:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.375 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.375 "name": "Existed_Raid", 00:17:27.375 "uuid": "ef527943-2e74-48d0-be5d-c709c7da61b3", 00:17:27.375 "strip_size_kb": 0, 00:17:27.375 "state": "configuring", 00:17:27.375 "raid_level": "raid1", 00:17:27.375 "superblock": true, 00:17:27.375 "num_base_bdevs": 2, 00:17:27.375 "num_base_bdevs_discovered": 0, 00:17:27.375 "num_base_bdevs_operational": 2, 00:17:27.375 "base_bdevs_list": [ 00:17:27.375 { 00:17:27.376 "name": "BaseBdev1", 00:17:27.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.376 "is_configured": false, 00:17:27.376 "data_offset": 0, 00:17:27.376 "data_size": 0 00:17:27.376 }, 00:17:27.376 { 00:17:27.376 "name": "BaseBdev2", 00:17:27.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.376 "is_configured": false, 00:17:27.376 "data_offset": 0, 00:17:27.376 "data_size": 0 00:17:27.376 } 00:17:27.376 ] 00:17:27.376 }' 00:17:27.376 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.376 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.636 
[2024-12-12 19:46:10.421056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:27.636 [2024-12-12 19:46:10.421130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.636 [2024-12-12 19:46:10.433039] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.636 [2024-12-12 19:46:10.433115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.636 [2024-12-12 19:46:10.433142] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.636 [2024-12-12 19:46:10.433166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.636 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.636 [2024-12-12 19:46:10.479434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.895 
BaseBdev1 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.895 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.895 [ 00:17:27.895 { 00:17:27.895 "name": "BaseBdev1", 00:17:27.895 "aliases": [ 00:17:27.895 "0f4ab9c7-c97c-4414-b190-019c7092e041" 00:17:27.895 ], 00:17:27.895 "product_name": "Malloc disk", 
00:17:27.895 "block_size": 4096, 00:17:27.895 "num_blocks": 8192, 00:17:27.895 "uuid": "0f4ab9c7-c97c-4414-b190-019c7092e041", 00:17:27.895 "md_size": 32, 00:17:27.895 "md_interleave": false, 00:17:27.895 "dif_type": 0, 00:17:27.895 "assigned_rate_limits": { 00:17:27.895 "rw_ios_per_sec": 0, 00:17:27.895 "rw_mbytes_per_sec": 0, 00:17:27.895 "r_mbytes_per_sec": 0, 00:17:27.895 "w_mbytes_per_sec": 0 00:17:27.895 }, 00:17:27.895 "claimed": true, 00:17:27.895 "claim_type": "exclusive_write", 00:17:27.895 "zoned": false, 00:17:27.895 "supported_io_types": { 00:17:27.895 "read": true, 00:17:27.895 "write": true, 00:17:27.895 "unmap": true, 00:17:27.895 "flush": true, 00:17:27.895 "reset": true, 00:17:27.895 "nvme_admin": false, 00:17:27.895 "nvme_io": false, 00:17:27.895 "nvme_io_md": false, 00:17:27.895 "write_zeroes": true, 00:17:27.895 "zcopy": true, 00:17:27.895 "get_zone_info": false, 00:17:27.895 "zone_management": false, 00:17:27.895 "zone_append": false, 00:17:27.895 "compare": false, 00:17:27.895 "compare_and_write": false, 00:17:27.895 "abort": true, 00:17:27.895 "seek_hole": false, 00:17:27.895 "seek_data": false, 00:17:27.895 "copy": true, 00:17:27.895 "nvme_iov_md": false 00:17:27.896 }, 00:17:27.896 "memory_domains": [ 00:17:27.896 { 00:17:27.896 "dma_device_id": "system", 00:17:27.896 "dma_device_type": 1 00:17:27.896 }, 00:17:27.896 { 00:17:27.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.896 "dma_device_type": 2 00:17:27.896 } 00:17:27.896 ], 00:17:27.896 "driver_specific": {} 00:17:27.896 } 00:17:27.896 ] 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:27.896 19:46:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.896 "name": "Existed_Raid", 00:17:27.896 "uuid": "865fcc7b-0687-419b-82b2-9f03ab925850", 
00:17:27.896 "strip_size_kb": 0, 00:17:27.896 "state": "configuring", 00:17:27.896 "raid_level": "raid1", 00:17:27.896 "superblock": true, 00:17:27.896 "num_base_bdevs": 2, 00:17:27.896 "num_base_bdevs_discovered": 1, 00:17:27.896 "num_base_bdevs_operational": 2, 00:17:27.896 "base_bdevs_list": [ 00:17:27.896 { 00:17:27.896 "name": "BaseBdev1", 00:17:27.896 "uuid": "0f4ab9c7-c97c-4414-b190-019c7092e041", 00:17:27.896 "is_configured": true, 00:17:27.896 "data_offset": 256, 00:17:27.896 "data_size": 7936 00:17:27.896 }, 00:17:27.896 { 00:17:27.896 "name": "BaseBdev2", 00:17:27.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.896 "is_configured": false, 00:17:27.896 "data_offset": 0, 00:17:27.896 "data_size": 0 00:17:27.896 } 00:17:27.896 ] 00:17:27.896 }' 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.896 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.155 [2024-12-12 19:46:10.982616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.155 [2024-12-12 19:46:10.982692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:28.155 19:46:10 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.155 [2024-12-12 19:46:10.990644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.155 [2024-12-12 19:46:10.992391] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.155 [2024-12-12 19:46:10.992440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.155 19:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.414 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.414 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.414 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.414 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.414 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.414 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.414 "name": "Existed_Raid", 00:17:28.414 "uuid": "7107866e-674e-4d54-beca-af59149fb4b4", 00:17:28.414 "strip_size_kb": 0, 00:17:28.414 "state": "configuring", 00:17:28.414 "raid_level": "raid1", 00:17:28.414 "superblock": true, 00:17:28.414 "num_base_bdevs": 2, 00:17:28.414 "num_base_bdevs_discovered": 1, 00:17:28.414 "num_base_bdevs_operational": 2, 00:17:28.414 "base_bdevs_list": [ 00:17:28.414 { 00:17:28.414 "name": "BaseBdev1", 00:17:28.414 "uuid": "0f4ab9c7-c97c-4414-b190-019c7092e041", 00:17:28.414 "is_configured": true, 00:17:28.414 "data_offset": 256, 00:17:28.414 "data_size": 7936 00:17:28.414 }, 00:17:28.414 { 00:17:28.414 "name": "BaseBdev2", 00:17:28.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.414 "is_configured": false, 00:17:28.414 "data_offset": 0, 00:17:28.414 "data_size": 0 00:17:28.414 } 00:17:28.414 ] 00:17:28.414 }' 00:17:28.414 19:46:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.414 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.674 [2024-12-12 19:46:11.483173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.674 [2024-12-12 19:46:11.484019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:28.674 [2024-12-12 19:46:11.484163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:28.674 [2024-12-12 19:46:11.484499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:28.674 BaseBdev2 00:17:28.674 [2024-12-12 19:46:11.485018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:28.674 [2024-12-12 19:46:11.485159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.674 [2024-12-12 19:46:11.485646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.674 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.674 [ 00:17:28.674 { 00:17:28.674 "name": "BaseBdev2", 00:17:28.674 "aliases": [ 00:17:28.674 "50dbd9d7-f311-4c18-89ec-bc024b837c76" 00:17:28.674 ], 00:17:28.674 "product_name": "Malloc disk", 00:17:28.674 "block_size": 4096, 00:17:28.674 "num_blocks": 8192, 00:17:28.674 "uuid": "50dbd9d7-f311-4c18-89ec-bc024b837c76", 00:17:28.674 "md_size": 32, 00:17:28.674 "md_interleave": false, 00:17:28.674 "dif_type": 0, 00:17:28.674 "assigned_rate_limits": { 00:17:28.674 "rw_ios_per_sec": 0, 00:17:28.674 "rw_mbytes_per_sec": 0, 00:17:28.674 "r_mbytes_per_sec": 0, 00:17:28.674 "w_mbytes_per_sec": 0 00:17:28.674 }, 00:17:28.674 "claimed": true, 00:17:28.674 "claim_type": 
"exclusive_write", 00:17:28.674 "zoned": false, 00:17:28.674 "supported_io_types": { 00:17:28.674 "read": true, 00:17:28.674 "write": true, 00:17:28.674 "unmap": true, 00:17:28.674 "flush": true, 00:17:28.934 "reset": true, 00:17:28.934 "nvme_admin": false, 00:17:28.934 "nvme_io": false, 00:17:28.934 "nvme_io_md": false, 00:17:28.934 "write_zeroes": true, 00:17:28.934 "zcopy": true, 00:17:28.934 "get_zone_info": false, 00:17:28.934 "zone_management": false, 00:17:28.934 "zone_append": false, 00:17:28.934 "compare": false, 00:17:28.934 "compare_and_write": false, 00:17:28.934 "abort": true, 00:17:28.934 "seek_hole": false, 00:17:28.934 "seek_data": false, 00:17:28.934 "copy": true, 00:17:28.934 "nvme_iov_md": false 00:17:28.934 }, 00:17:28.934 "memory_domains": [ 00:17:28.934 { 00:17:28.934 "dma_device_id": "system", 00:17:28.934 "dma_device_type": 1 00:17:28.934 }, 00:17:28.934 { 00:17:28.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.934 "dma_device_type": 2 00:17:28.934 } 00:17:28.934 ], 00:17:28.934 "driver_specific": {} 00:17:28.934 } 00:17:28.934 ] 00:17:28.934 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.934 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:28.934 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:28.934 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:28.934 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:28.934 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.934 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.934 
19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.934 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.934 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.935 "name": "Existed_Raid", 00:17:28.935 "uuid": "7107866e-674e-4d54-beca-af59149fb4b4", 00:17:28.935 "strip_size_kb": 0, 00:17:28.935 "state": "online", 00:17:28.935 "raid_level": "raid1", 00:17:28.935 "superblock": true, 00:17:28.935 "num_base_bdevs": 2, 00:17:28.935 "num_base_bdevs_discovered": 2, 00:17:28.935 "num_base_bdevs_operational": 2, 00:17:28.935 
"base_bdevs_list": [ 00:17:28.935 { 00:17:28.935 "name": "BaseBdev1", 00:17:28.935 "uuid": "0f4ab9c7-c97c-4414-b190-019c7092e041", 00:17:28.935 "is_configured": true, 00:17:28.935 "data_offset": 256, 00:17:28.935 "data_size": 7936 00:17:28.935 }, 00:17:28.935 { 00:17:28.935 "name": "BaseBdev2", 00:17:28.935 "uuid": "50dbd9d7-f311-4c18-89ec-bc024b837c76", 00:17:28.935 "is_configured": true, 00:17:28.935 "data_offset": 256, 00:17:28.935 "data_size": 7936 00:17:28.935 } 00:17:28.935 ] 00:17:28.935 }' 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.935 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.194 19:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:17:29.194 [2024-12-12 19:46:11.998670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.194 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.194 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.194 "name": "Existed_Raid", 00:17:29.194 "aliases": [ 00:17:29.194 "7107866e-674e-4d54-beca-af59149fb4b4" 00:17:29.194 ], 00:17:29.194 "product_name": "Raid Volume", 00:17:29.194 "block_size": 4096, 00:17:29.194 "num_blocks": 7936, 00:17:29.194 "uuid": "7107866e-674e-4d54-beca-af59149fb4b4", 00:17:29.194 "md_size": 32, 00:17:29.194 "md_interleave": false, 00:17:29.194 "dif_type": 0, 00:17:29.194 "assigned_rate_limits": { 00:17:29.194 "rw_ios_per_sec": 0, 00:17:29.194 "rw_mbytes_per_sec": 0, 00:17:29.194 "r_mbytes_per_sec": 0, 00:17:29.194 "w_mbytes_per_sec": 0 00:17:29.194 }, 00:17:29.194 "claimed": false, 00:17:29.194 "zoned": false, 00:17:29.194 "supported_io_types": { 00:17:29.194 "read": true, 00:17:29.194 "write": true, 00:17:29.194 "unmap": false, 00:17:29.195 "flush": false, 00:17:29.195 "reset": true, 00:17:29.195 "nvme_admin": false, 00:17:29.195 "nvme_io": false, 00:17:29.195 "nvme_io_md": false, 00:17:29.195 "write_zeroes": true, 00:17:29.195 "zcopy": false, 00:17:29.195 "get_zone_info": false, 00:17:29.195 "zone_management": false, 00:17:29.195 "zone_append": false, 00:17:29.195 "compare": false, 00:17:29.195 "compare_and_write": false, 00:17:29.195 "abort": false, 00:17:29.195 "seek_hole": false, 00:17:29.195 "seek_data": false, 00:17:29.195 "copy": false, 00:17:29.195 "nvme_iov_md": false 00:17:29.195 }, 00:17:29.195 "memory_domains": [ 00:17:29.195 { 00:17:29.195 "dma_device_id": "system", 00:17:29.195 "dma_device_type": 1 00:17:29.195 }, 00:17:29.195 { 00:17:29.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.195 "dma_device_type": 2 00:17:29.195 }, 00:17:29.195 { 
00:17:29.195 "dma_device_id": "system", 00:17:29.195 "dma_device_type": 1 00:17:29.195 }, 00:17:29.195 { 00:17:29.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.195 "dma_device_type": 2 00:17:29.195 } 00:17:29.195 ], 00:17:29.195 "driver_specific": { 00:17:29.195 "raid": { 00:17:29.195 "uuid": "7107866e-674e-4d54-beca-af59149fb4b4", 00:17:29.195 "strip_size_kb": 0, 00:17:29.195 "state": "online", 00:17:29.195 "raid_level": "raid1", 00:17:29.195 "superblock": true, 00:17:29.195 "num_base_bdevs": 2, 00:17:29.195 "num_base_bdevs_discovered": 2, 00:17:29.195 "num_base_bdevs_operational": 2, 00:17:29.195 "base_bdevs_list": [ 00:17:29.195 { 00:17:29.195 "name": "BaseBdev1", 00:17:29.195 "uuid": "0f4ab9c7-c97c-4414-b190-019c7092e041", 00:17:29.195 "is_configured": true, 00:17:29.195 "data_offset": 256, 00:17:29.195 "data_size": 7936 00:17:29.195 }, 00:17:29.195 { 00:17:29.195 "name": "BaseBdev2", 00:17:29.195 "uuid": "50dbd9d7-f311-4c18-89ec-bc024b837c76", 00:17:29.195 "is_configured": true, 00:17:29.195 "data_offset": 256, 00:17:29.195 "data_size": 7936 00:17:29.195 } 00:17:29.195 ] 00:17:29.195 } 00:17:29.195 } 00:17:29.195 }' 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:29.454 BaseBdev2' 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:29.454 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:29.455 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:29.455 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.455 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.455 [2024-12-12 19:46:12.206075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:29.714 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.715 "name": "Existed_Raid", 00:17:29.715 "uuid": "7107866e-674e-4d54-beca-af59149fb4b4", 00:17:29.715 "strip_size_kb": 0, 00:17:29.715 "state": "online", 00:17:29.715 "raid_level": "raid1", 00:17:29.715 "superblock": true, 00:17:29.715 "num_base_bdevs": 2, 00:17:29.715 "num_base_bdevs_discovered": 1, 00:17:29.715 "num_base_bdevs_operational": 1, 00:17:29.715 "base_bdevs_list": [ 00:17:29.715 { 00:17:29.715 "name": null, 00:17:29.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.715 "is_configured": false, 00:17:29.715 "data_offset": 0, 00:17:29.715 "data_size": 7936 00:17:29.715 }, 00:17:29.715 { 00:17:29.715 "name": "BaseBdev2", 00:17:29.715 "uuid": 
"50dbd9d7-f311-4c18-89ec-bc024b837c76", 00:17:29.715 "is_configured": true, 00:17:29.715 "data_offset": 256, 00:17:29.715 "data_size": 7936 00:17:29.715 } 00:17:29.715 ] 00:17:29.715 }' 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.715 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.975 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.975 [2024-12-12 19:46:12.792491] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:29.975 [2024-12-12 19:46:12.792611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.235 [2024-12-12 19:46:12.891166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.235 [2024-12-12 19:46:12.891286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.235 [2024-12-12 19:46:12.891325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:30.235 19:46:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 88835 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88835 ']' 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88835 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88835 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.235 killing process with pid 88835 00:17:30.235 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.236 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88835' 00:17:30.236 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88835 00:17:30.236 [2024-12-12 19:46:12.995424] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.236 19:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88835 00:17:30.236 [2024-12-12 19:46:13.011243] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.616 19:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:31.616 00:17:31.616 real 0m5.011s 00:17:31.616 user 0m7.206s 00:17:31.616 sys 0m0.882s 00:17:31.616 19:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.616 
19:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.616 ************************************ 00:17:31.616 END TEST raid_state_function_test_sb_md_separate 00:17:31.616 ************************************ 00:17:31.616 19:46:14 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:31.616 19:46:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:31.616 19:46:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.616 19:46:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.616 ************************************ 00:17:31.616 START TEST raid_superblock_test_md_separate 00:17:31.616 ************************************ 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=89082 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 89082 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 89082 ']' 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.616 19:46:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.616 [2024-12-12 19:46:14.237071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:31.616 [2024-12-12 19:46:14.237319] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89082 ] 00:17:31.616 [2024-12-12 19:46:14.407980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.875 [2024-12-12 19:46:14.514556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.875 [2024-12-12 19:46:14.708089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.875 [2024-12-12 19:46:14.708170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:32.447 19:46:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.447 malloc1 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.447 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.447 [2024-12-12 19:46:15.111172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:32.447 [2024-12-12 19:46:15.111265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.447 [2024-12-12 19:46:15.111291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:32.447 [2024-12-12 19:46:15.111301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.448 [2024-12-12 19:46:15.113104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.448 [2024-12-12 19:46:15.113144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:32.448 pt1 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.448 malloc2 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.448 19:46:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.448 [2024-12-12 19:46:15.164158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:32.448 [2024-12-12 19:46:15.164258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.448 [2024-12-12 19:46:15.164295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:32.448 [2024-12-12 19:46:15.164322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.448 [2024-12-12 19:46:15.166113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.448 [2024-12-12 19:46:15.166184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:32.448 pt2 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.448 [2024-12-12 19:46:15.176164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:32.448 [2024-12-12 19:46:15.177837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.448 [2024-12-12 19:46:15.178039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:32.448 [2024-12-12 19:46:15.178084] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:32.448 [2024-12-12 19:46:15.178225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:32.448 [2024-12-12 19:46:15.178423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:32.448 [2024-12-12 19:46:15.178470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:32.448 [2024-12-12 19:46:15.178623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.448 19:46:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.448 "name": "raid_bdev1", 00:17:32.448 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:32.448 "strip_size_kb": 0, 00:17:32.448 "state": "online", 00:17:32.448 "raid_level": "raid1", 00:17:32.448 "superblock": true, 00:17:32.448 "num_base_bdevs": 2, 00:17:32.448 "num_base_bdevs_discovered": 2, 00:17:32.448 "num_base_bdevs_operational": 2, 00:17:32.448 "base_bdevs_list": [ 00:17:32.448 { 00:17:32.448 "name": "pt1", 00:17:32.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.448 "is_configured": true, 00:17:32.448 "data_offset": 256, 00:17:32.448 "data_size": 7936 00:17:32.448 }, 00:17:32.448 { 00:17:32.448 "name": "pt2", 00:17:32.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.448 "is_configured": true, 00:17:32.448 "data_offset": 256, 00:17:32.448 "data_size": 7936 00:17:32.448 } 00:17:32.448 ] 00:17:32.448 }' 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.448 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.018 [2024-12-12 19:46:15.663564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.018 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.018 "name": "raid_bdev1", 00:17:33.018 "aliases": [ 00:17:33.018 "cc0fb2be-abff-4e34-aa9c-373d17fbf302" 00:17:33.018 ], 00:17:33.018 "product_name": "Raid Volume", 00:17:33.018 "block_size": 4096, 00:17:33.018 "num_blocks": 7936, 00:17:33.018 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:33.018 "md_size": 32, 00:17:33.018 "md_interleave": false, 00:17:33.018 "dif_type": 0, 00:17:33.018 "assigned_rate_limits": { 00:17:33.018 "rw_ios_per_sec": 0, 00:17:33.018 "rw_mbytes_per_sec": 0, 00:17:33.018 "r_mbytes_per_sec": 0, 00:17:33.018 "w_mbytes_per_sec": 0 00:17:33.018 }, 00:17:33.018 "claimed": false, 00:17:33.018 "zoned": false, 
00:17:33.018 "supported_io_types": { 00:17:33.018 "read": true, 00:17:33.018 "write": true, 00:17:33.018 "unmap": false, 00:17:33.018 "flush": false, 00:17:33.018 "reset": true, 00:17:33.018 "nvme_admin": false, 00:17:33.018 "nvme_io": false, 00:17:33.018 "nvme_io_md": false, 00:17:33.018 "write_zeroes": true, 00:17:33.018 "zcopy": false, 00:17:33.018 "get_zone_info": false, 00:17:33.018 "zone_management": false, 00:17:33.018 "zone_append": false, 00:17:33.018 "compare": false, 00:17:33.018 "compare_and_write": false, 00:17:33.018 "abort": false, 00:17:33.018 "seek_hole": false, 00:17:33.018 "seek_data": false, 00:17:33.018 "copy": false, 00:17:33.018 "nvme_iov_md": false 00:17:33.018 }, 00:17:33.018 "memory_domains": [ 00:17:33.018 { 00:17:33.018 "dma_device_id": "system", 00:17:33.018 "dma_device_type": 1 00:17:33.018 }, 00:17:33.018 { 00:17:33.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.018 "dma_device_type": 2 00:17:33.018 }, 00:17:33.018 { 00:17:33.018 "dma_device_id": "system", 00:17:33.018 "dma_device_type": 1 00:17:33.018 }, 00:17:33.018 { 00:17:33.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.018 "dma_device_type": 2 00:17:33.018 } 00:17:33.018 ], 00:17:33.018 "driver_specific": { 00:17:33.018 "raid": { 00:17:33.018 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:33.018 "strip_size_kb": 0, 00:17:33.018 "state": "online", 00:17:33.018 "raid_level": "raid1", 00:17:33.018 "superblock": true, 00:17:33.018 "num_base_bdevs": 2, 00:17:33.019 "num_base_bdevs_discovered": 2, 00:17:33.019 "num_base_bdevs_operational": 2, 00:17:33.019 "base_bdevs_list": [ 00:17:33.019 { 00:17:33.019 "name": "pt1", 00:17:33.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.019 "is_configured": true, 00:17:33.019 "data_offset": 256, 00:17:33.019 "data_size": 7936 00:17:33.019 }, 00:17:33.019 { 00:17:33.019 "name": "pt2", 00:17:33.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.019 "is_configured": true, 00:17:33.019 "data_offset": 256, 
00:17:33.019 "data_size": 7936 00:17:33.019 } 00:17:33.019 ] 00:17:33.019 } 00:17:33.019 } 00:17:33.019 }' 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:33.019 pt2' 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.019 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 [2024-12-12 19:46:15.903122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cc0fb2be-abff-4e34-aa9c-373d17fbf302 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z cc0fb2be-abff-4e34-aa9c-373d17fbf302 ']' 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 [2024-12-12 19:46:15.946797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.278 [2024-12-12 19:46:15.946855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.278 [2024-12-12 19:46:15.946950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.278 [2024-12-12 19:46:15.947019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.278 [2024-12-12 19:46:15.947053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 19:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:33.278 19:46:16 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 [2024-12-12 19:46:16.094600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:33.278 [2024-12-12 19:46:16.096485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:33.278 [2024-12-12 19:46:16.096637] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:33.278 [2024-12-12 19:46:16.096696] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:33.278 [2024-12-12 19:46:16.096710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.278 [2024-12-12 19:46:16.096719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:33.278 request: 00:17:33.278 { 00:17:33.278 "name": 
"raid_bdev1", 00:17:33.278 "raid_level": "raid1", 00:17:33.278 "base_bdevs": [ 00:17:33.278 "malloc1", 00:17:33.278 "malloc2" 00:17:33.278 ], 00:17:33.278 "superblock": false, 00:17:33.278 "method": "bdev_raid_create", 00:17:33.278 "req_id": 1 00:17:33.278 } 00:17:33.278 Got JSON-RPC error response 00:17:33.278 response: 00:17:33.278 { 00:17:33.278 "code": -17, 00:17:33.278 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:33.278 } 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.537 [2024-12-12 19:46:16.158454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.537 [2024-12-12 19:46:16.158548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.537 [2024-12-12 19:46:16.158582] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:33.537 [2024-12-12 19:46:16.158615] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.537 [2024-12-12 19:46:16.160480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.537 [2024-12-12 19:46:16.160558] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.537 [2024-12-12 19:46:16.160624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:33.537 [2024-12-12 19:46:16.160692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.537 pt1 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.537 "name": "raid_bdev1", 00:17:33.537 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:33.537 "strip_size_kb": 0, 00:17:33.537 "state": "configuring", 00:17:33.537 "raid_level": "raid1", 00:17:33.537 "superblock": true, 00:17:33.537 "num_base_bdevs": 2, 00:17:33.537 "num_base_bdevs_discovered": 1, 00:17:33.537 "num_base_bdevs_operational": 2, 00:17:33.537 "base_bdevs_list": [ 00:17:33.537 { 00:17:33.537 "name": "pt1", 00:17:33.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.537 "is_configured": true, 00:17:33.537 "data_offset": 256, 00:17:33.537 "data_size": 7936 00:17:33.537 }, 00:17:33.537 { 00:17:33.537 "name": null, 00:17:33.537 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.537 "is_configured": false, 00:17:33.537 "data_offset": 256, 00:17:33.537 "data_size": 7936 00:17:33.537 } 00:17:33.537 ] 00:17:33.537 }' 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.537 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.796 [2024-12-12 19:46:16.605909] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.796 [2024-12-12 19:46:16.606002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.796 [2024-12-12 19:46:16.606038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:33.796 [2024-12-12 19:46:16.606066] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.796 [2024-12-12 19:46:16.606283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.796 [2024-12-12 19:46:16.606331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.796 [2024-12-12 19:46:16.606383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:33.796 [2024-12-12 19:46:16.606408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.796 [2024-12-12 19:46:16.606516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:33.796 [2024-12-12 19:46:16.606526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:33.796 [2024-12-12 19:46:16.606614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:33.796 [2024-12-12 19:46:16.606720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:33.796 [2024-12-12 19:46:16.606728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:33.796 [2024-12-12 19:46:16.606829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.796 pt2 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.796 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.055 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.055 "name": "raid_bdev1", 00:17:34.055 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:34.055 "strip_size_kb": 0, 00:17:34.055 "state": "online", 00:17:34.055 "raid_level": "raid1", 00:17:34.055 "superblock": true, 00:17:34.055 "num_base_bdevs": 2, 00:17:34.055 "num_base_bdevs_discovered": 2, 00:17:34.055 "num_base_bdevs_operational": 2, 00:17:34.055 "base_bdevs_list": [ 00:17:34.055 { 00:17:34.055 "name": "pt1", 00:17:34.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.055 "is_configured": true, 00:17:34.055 "data_offset": 256, 00:17:34.055 "data_size": 7936 00:17:34.055 }, 00:17:34.055 { 00:17:34.055 "name": "pt2", 00:17:34.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.055 "is_configured": true, 00:17:34.055 "data_offset": 256, 
00:17:34.055 "data_size": 7936 00:17:34.055 } 00:17:34.055 ] 00:17:34.055 }' 00:17:34.055 19:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.055 19:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.314 [2024-12-12 19:46:17.085441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.314 "name": "raid_bdev1", 00:17:34.314 "aliases": [ 00:17:34.314 "cc0fb2be-abff-4e34-aa9c-373d17fbf302" 00:17:34.314 ], 00:17:34.314 "product_name": 
"Raid Volume", 00:17:34.314 "block_size": 4096, 00:17:34.314 "num_blocks": 7936, 00:17:34.314 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:34.314 "md_size": 32, 00:17:34.314 "md_interleave": false, 00:17:34.314 "dif_type": 0, 00:17:34.314 "assigned_rate_limits": { 00:17:34.314 "rw_ios_per_sec": 0, 00:17:34.314 "rw_mbytes_per_sec": 0, 00:17:34.314 "r_mbytes_per_sec": 0, 00:17:34.314 "w_mbytes_per_sec": 0 00:17:34.314 }, 00:17:34.314 "claimed": false, 00:17:34.314 "zoned": false, 00:17:34.314 "supported_io_types": { 00:17:34.314 "read": true, 00:17:34.314 "write": true, 00:17:34.314 "unmap": false, 00:17:34.314 "flush": false, 00:17:34.314 "reset": true, 00:17:34.314 "nvme_admin": false, 00:17:34.314 "nvme_io": false, 00:17:34.314 "nvme_io_md": false, 00:17:34.314 "write_zeroes": true, 00:17:34.314 "zcopy": false, 00:17:34.314 "get_zone_info": false, 00:17:34.314 "zone_management": false, 00:17:34.314 "zone_append": false, 00:17:34.314 "compare": false, 00:17:34.314 "compare_and_write": false, 00:17:34.314 "abort": false, 00:17:34.314 "seek_hole": false, 00:17:34.314 "seek_data": false, 00:17:34.314 "copy": false, 00:17:34.314 "nvme_iov_md": false 00:17:34.314 }, 00:17:34.314 "memory_domains": [ 00:17:34.314 { 00:17:34.314 "dma_device_id": "system", 00:17:34.314 "dma_device_type": 1 00:17:34.314 }, 00:17:34.314 { 00:17:34.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.314 "dma_device_type": 2 00:17:34.314 }, 00:17:34.314 { 00:17:34.314 "dma_device_id": "system", 00:17:34.314 "dma_device_type": 1 00:17:34.314 }, 00:17:34.314 { 00:17:34.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.314 "dma_device_type": 2 00:17:34.314 } 00:17:34.314 ], 00:17:34.314 "driver_specific": { 00:17:34.314 "raid": { 00:17:34.314 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:34.314 "strip_size_kb": 0, 00:17:34.314 "state": "online", 00:17:34.314 "raid_level": "raid1", 00:17:34.314 "superblock": true, 00:17:34.314 "num_base_bdevs": 2, 00:17:34.314 
"num_base_bdevs_discovered": 2, 00:17:34.314 "num_base_bdevs_operational": 2, 00:17:34.314 "base_bdevs_list": [ 00:17:34.314 { 00:17:34.314 "name": "pt1", 00:17:34.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.314 "is_configured": true, 00:17:34.314 "data_offset": 256, 00:17:34.314 "data_size": 7936 00:17:34.314 }, 00:17:34.314 { 00:17:34.314 "name": "pt2", 00:17:34.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.314 "is_configured": true, 00:17:34.314 "data_offset": 256, 00:17:34.314 "data_size": 7936 00:17:34.314 } 00:17:34.314 ] 00:17:34.314 } 00:17:34.314 } 00:17:34.314 }' 00:17:34.314 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:34.574 pt2' 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.574 
19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.574 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.575 [2024-12-12 19:46:17.325005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' cc0fb2be-abff-4e34-aa9c-373d17fbf302 '!=' cc0fb2be-abff-4e34-aa9c-373d17fbf302 ']' 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.575 [2024-12-12 19:46:17.368722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.575 19:46:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.575 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.834 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.834 "name": "raid_bdev1", 00:17:34.834 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:34.834 "strip_size_kb": 0, 00:17:34.834 "state": "online", 00:17:34.834 "raid_level": "raid1", 00:17:34.834 "superblock": true, 00:17:34.834 "num_base_bdevs": 2, 00:17:34.834 "num_base_bdevs_discovered": 1, 00:17:34.834 "num_base_bdevs_operational": 1, 00:17:34.834 "base_bdevs_list": [ 00:17:34.834 { 00:17:34.834 "name": null, 00:17:34.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.834 "is_configured": false, 00:17:34.834 "data_offset": 0, 00:17:34.834 "data_size": 7936 00:17:34.834 }, 00:17:34.834 { 00:17:34.834 "name": "pt2", 00:17:34.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.834 "is_configured": true, 00:17:34.834 "data_offset": 256, 00:17:34.834 "data_size": 7936 00:17:34.834 } 00:17:34.834 ] 00:17:34.834 }' 00:17:34.834 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:34.834 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.093 [2024-12-12 19:46:17.831895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.093 [2024-12-12 19:46:17.831955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.093 [2024-12-12 19:46:17.832032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.093 [2024-12-12 19:46:17.832089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.093 [2024-12-12 19:46:17.832168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:35.093 19:46:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.093 [2024-12-12 19:46:17.903784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.093 [2024-12-12 19:46:17.903872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.093 
[2024-12-12 19:46:17.903902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:35.093 [2024-12-12 19:46:17.903933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.093 [2024-12-12 19:46:17.905834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.093 [2024-12-12 19:46:17.905914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.093 [2024-12-12 19:46:17.905996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:35.093 [2024-12-12 19:46:17.906068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.093 [2024-12-12 19:46:17.906202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:35.093 [2024-12-12 19:46:17.906238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:35.093 [2024-12-12 19:46:17.906383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:35.093 [2024-12-12 19:46:17.906532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:35.093 [2024-12-12 19:46:17.906585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:35.093 [2024-12-12 19:46:17.906719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.093 pt2 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.093 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.094 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.352 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.352 "name": "raid_bdev1", 00:17:35.352 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:35.352 "strip_size_kb": 0, 00:17:35.352 "state": "online", 00:17:35.352 "raid_level": "raid1", 00:17:35.352 "superblock": true, 00:17:35.352 "num_base_bdevs": 2, 00:17:35.352 "num_base_bdevs_discovered": 1, 00:17:35.352 "num_base_bdevs_operational": 1, 00:17:35.352 "base_bdevs_list": [ 00:17:35.352 { 00:17:35.352 
"name": null, 00:17:35.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.352 "is_configured": false, 00:17:35.352 "data_offset": 256, 00:17:35.352 "data_size": 7936 00:17:35.352 }, 00:17:35.352 { 00:17:35.352 "name": "pt2", 00:17:35.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.352 "is_configured": true, 00:17:35.352 "data_offset": 256, 00:17:35.352 "data_size": 7936 00:17:35.352 } 00:17:35.352 ] 00:17:35.352 }' 00:17:35.352 19:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.352 19:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 [2024-12-12 19:46:18.295143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.612 [2024-12-12 19:46:18.295208] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.612 [2024-12-12 19:46:18.295304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.612 [2024-12-12 19:46:18.295385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.612 [2024-12-12 19:46:18.295425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 [2024-12-12 19:46:18.339086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.612 [2024-12-12 19:46:18.339174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.612 [2024-12-12 19:46:18.339210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:35.612 [2024-12-12 19:46:18.339239] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.612 [2024-12-12 19:46:18.341130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.612 [2024-12-12 19:46:18.341199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.612 [2024-12-12 19:46:18.341267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:35.612 
[2024-12-12 19:46:18.341327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.612 [2024-12-12 19:46:18.341500] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:35.612 [2024-12-12 19:46:18.341563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.612 [2024-12-12 19:46:18.341607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:35.612 [2024-12-12 19:46:18.341730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.612 [2024-12-12 19:46:18.341839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:35.612 [2024-12-12 19:46:18.341873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:35.612 [2024-12-12 19:46:18.341956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:35.612 [2024-12-12 19:46:18.342094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:35.612 [2024-12-12 19:46:18.342130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:35.612 [2024-12-12 19:46:18.342304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.612 pt1 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.612 19:46:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.612 "name": "raid_bdev1", 00:17:35.612 "uuid": "cc0fb2be-abff-4e34-aa9c-373d17fbf302", 00:17:35.612 "strip_size_kb": 0, 00:17:35.612 "state": "online", 00:17:35.612 "raid_level": "raid1", 00:17:35.612 "superblock": true, 00:17:35.612 "num_base_bdevs": 2, 00:17:35.612 "num_base_bdevs_discovered": 1, 00:17:35.612 
"num_base_bdevs_operational": 1, 00:17:35.612 "base_bdevs_list": [ 00:17:35.612 { 00:17:35.612 "name": null, 00:17:35.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.612 "is_configured": false, 00:17:35.612 "data_offset": 256, 00:17:35.612 "data_size": 7936 00:17:35.612 }, 00:17:35.612 { 00:17:35.612 "name": "pt2", 00:17:35.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.612 "is_configured": true, 00:17:35.612 "data_offset": 256, 00:17:35.612 "data_size": 7936 00:17:35.612 } 00:17:35.612 ] 00:17:35.612 }' 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.612 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.182 [2024-12-12 
19:46:18.774620] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' cc0fb2be-abff-4e34-aa9c-373d17fbf302 '!=' cc0fb2be-abff-4e34-aa9c-373d17fbf302 ']' 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 89082 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 89082 ']' 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 89082 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89082 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.182 killing process with pid 89082 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89082' 00:17:36.182 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 89082 00:17:36.182 [2024-12-12 19:46:18.838752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:36.182 [2024-12-12 19:46:18.838815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.183 [2024-12-12 19:46:18.838850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:36.183 [2024-12-12 19:46:18.838864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:36.183 19:46:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 89082 00:17:36.441 [2024-12-12 19:46:19.044554] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.380 ************************************ 00:17:37.380 END TEST raid_superblock_test_md_separate 00:17:37.380 ************************************ 00:17:37.380 19:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:37.380 00:17:37.380 real 0m5.971s 00:17:37.380 user 0m8.992s 00:17:37.380 sys 0m1.140s 00:17:37.380 19:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.380 19:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.380 19:46:20 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:37.380 19:46:20 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:37.380 19:46:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:37.380 19:46:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.380 19:46:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.380 ************************************ 00:17:37.380 START TEST raid_rebuild_test_sb_md_separate 00:17:37.380 ************************************ 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:37.380 
19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=89405 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 89405 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 89405 ']' 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.380 19:46:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.638 [2024-12-12 19:46:20.307990] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:37.638 [2024-12-12 19:46:20.308192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89405 ] 00:17:37.638 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:37.638 Zero copy mechanism will not be used. 00:17:37.897 [2024-12-12 19:46:20.484867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.897 [2024-12-12 19:46:20.597328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.155 [2024-12-12 19:46:20.786864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.155 [2024-12-12 19:46:20.786994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.415 BaseBdev1_malloc 
00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.415 [2024-12-12 19:46:21.180102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.415 [2024-12-12 19:46:21.180162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.415 [2024-12-12 19:46:21.180183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:38.415 [2024-12-12 19:46:21.180195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.415 [2024-12-12 19:46:21.182033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.415 [2024-12-12 19:46:21.182073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.415 BaseBdev1 00:17:38.415 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.416 BaseBdev2_malloc 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.416 [2024-12-12 19:46:21.235781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:38.416 [2024-12-12 19:46:21.235839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.416 [2024-12-12 19:46:21.235858] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:38.416 [2024-12-12 19:46:21.235870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.416 [2024-12-12 19:46:21.237702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.416 [2024-12-12 19:46:21.237739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:38.416 BaseBdev2 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.416 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.675 spare_malloc 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.675 spare_delay 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.675 [2024-12-12 19:46:21.314823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:38.675 [2024-12-12 19:46:21.314921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.675 [2024-12-12 19:46:21.314945] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:38.675 [2024-12-12 19:46:21.314955] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.675 [2024-12-12 19:46:21.316769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.675 [2024-12-12 19:46:21.316809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:38.675 spare 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.675 [2024-12-12 19:46:21.326841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.675 [2024-12-12 19:46:21.328535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.675 [2024-12-12 19:46:21.328719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:38.675 [2024-12-12 19:46:21.328739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:38.675 [2024-12-12 19:46:21.328808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:38.675 [2024-12-12 19:46:21.328942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:38.675 [2024-12-12 19:46:21.328951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:38.675 [2024-12-12 19:46:21.329043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.675 19:46:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.675 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.675 "name": "raid_bdev1", 00:17:38.675 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:38.675 "strip_size_kb": 0, 00:17:38.675 "state": "online", 00:17:38.675 "raid_level": "raid1", 00:17:38.675 "superblock": true, 00:17:38.675 "num_base_bdevs": 2, 00:17:38.675 "num_base_bdevs_discovered": 2, 00:17:38.675 "num_base_bdevs_operational": 2, 00:17:38.675 "base_bdevs_list": [ 00:17:38.675 { 00:17:38.675 "name": "BaseBdev1", 00:17:38.675 "uuid": "ab60e633-5959-5164-a185-838df02b5d3d", 00:17:38.675 "is_configured": true, 00:17:38.675 "data_offset": 256, 00:17:38.676 "data_size": 7936 00:17:38.676 }, 00:17:38.676 { 00:17:38.676 "name": "BaseBdev2", 00:17:38.676 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:38.676 "is_configured": true, 00:17:38.676 "data_offset": 256, 00:17:38.676 "data_size": 7936 
00:17:38.676 } 00:17:38.676 ] 00:17:38.676 }' 00:17:38.676 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.676 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.935 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:38.935 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:38.935 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.935 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.935 [2024-12-12 19:46:21.758519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.935 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.935 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:39.195 19:46:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:39.195 [2024-12-12 19:46:22.017821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:39.195 /dev/nbd0 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.455 1+0 records in 00:17:39.455 1+0 records out 00:17:39.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327359 s, 12.5 MB/s 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:39.455 19:46:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:39.455 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:40.024 7936+0 records in 00:17:40.024 7936+0 records out 00:17:40.024 32505856 bytes (33 MB, 31 MiB) copied, 0.599856 s, 54.2 MB/s 00:17:40.024 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:40.024 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.024 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:40.025 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:40.025 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:40.025 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:40.025 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:40.284 [2024-12-12 19:46:22.887644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.284 19:46:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.284 [2024-12-12 19:46:22.905166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.284 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.284 "name": "raid_bdev1", 00:17:40.284 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:40.284 "strip_size_kb": 0, 00:17:40.284 "state": "online", 00:17:40.284 "raid_level": "raid1", 00:17:40.284 "superblock": true, 00:17:40.284 "num_base_bdevs": 2, 00:17:40.284 "num_base_bdevs_discovered": 1, 00:17:40.284 "num_base_bdevs_operational": 1, 00:17:40.284 "base_bdevs_list": [ 00:17:40.284 { 00:17:40.284 "name": null, 00:17:40.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.284 "is_configured": false, 00:17:40.284 "data_offset": 0, 00:17:40.284 "data_size": 7936 00:17:40.285 }, 00:17:40.285 { 00:17:40.285 "name": "BaseBdev2", 00:17:40.285 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:40.285 "is_configured": true, 00:17:40.285 "data_offset": 256, 00:17:40.285 "data_size": 7936 00:17:40.285 } 00:17:40.285 ] 00:17:40.285 }' 00:17:40.285 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.285 19:46:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.544 19:46:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.544 19:46:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.544 19:46:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.544 [2024-12-12 19:46:23.344489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.544 [2024-12-12 19:46:23.358400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:40.544 19:46:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.544 19:46:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:40.544 [2024-12-12 19:46:23.360306] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.925 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.925 "name": "raid_bdev1", 00:17:41.925 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:41.925 "strip_size_kb": 0, 00:17:41.925 "state": "online", 00:17:41.925 "raid_level": "raid1", 00:17:41.925 "superblock": true, 00:17:41.925 "num_base_bdevs": 2, 00:17:41.925 "num_base_bdevs_discovered": 2, 00:17:41.925 "num_base_bdevs_operational": 2, 00:17:41.925 "process": { 00:17:41.925 "type": "rebuild", 00:17:41.925 "target": "spare", 00:17:41.925 "progress": { 00:17:41.925 "blocks": 2560, 00:17:41.925 "percent": 32 00:17:41.925 } 00:17:41.925 }, 00:17:41.926 "base_bdevs_list": [ 00:17:41.926 { 00:17:41.926 "name": "spare", 00:17:41.926 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:41.926 "is_configured": true, 00:17:41.926 "data_offset": 256, 00:17:41.926 "data_size": 7936 00:17:41.926 }, 00:17:41.926 { 00:17:41.926 "name": "BaseBdev2", 00:17:41.926 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:41.926 "is_configured": true, 00:17:41.926 "data_offset": 256, 00:17:41.926 "data_size": 7936 00:17:41.926 } 00:17:41.926 ] 00:17:41.926 }' 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.926 19:46:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.926 [2024-12-12 19:46:24.520102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.926 [2024-12-12 19:46:24.565835] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:41.926 [2024-12-12 19:46:24.565973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.926 [2024-12-12 19:46:24.566010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.926 [2024-12-12 19:46:24.566037] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.926 19:46:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.926 "name": "raid_bdev1", 00:17:41.926 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:41.926 "strip_size_kb": 0, 00:17:41.926 "state": "online", 00:17:41.926 "raid_level": "raid1", 00:17:41.926 "superblock": true, 00:17:41.926 "num_base_bdevs": 2, 00:17:41.926 "num_base_bdevs_discovered": 1, 00:17:41.926 "num_base_bdevs_operational": 1, 00:17:41.926 "base_bdevs_list": [ 00:17:41.926 { 00:17:41.926 "name": null, 00:17:41.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.926 "is_configured": false, 00:17:41.926 "data_offset": 0, 00:17:41.926 "data_size": 7936 00:17:41.926 }, 00:17:41.926 { 00:17:41.926 "name": "BaseBdev2", 00:17:41.926 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:41.926 "is_configured": true, 00:17:41.926 "data_offset": 256, 00:17:41.926 "data_size": 7936 00:17:41.926 } 00:17:41.926 ] 00:17:41.926 }' 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.926 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.185 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.186 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.186 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.186 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.186 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.186 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.186 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.186 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.186 19:46:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.186 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.186 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.186 "name": "raid_bdev1", 00:17:42.186 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:42.186 "strip_size_kb": 0, 00:17:42.186 "state": "online", 00:17:42.186 "raid_level": "raid1", 00:17:42.186 "superblock": true, 00:17:42.186 "num_base_bdevs": 2, 00:17:42.186 "num_base_bdevs_discovered": 1, 00:17:42.186 "num_base_bdevs_operational": 1, 00:17:42.186 "base_bdevs_list": [ 00:17:42.186 { 00:17:42.186 "name": null, 00:17:42.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.186 
"is_configured": false, 00:17:42.186 "data_offset": 0, 00:17:42.186 "data_size": 7936 00:17:42.186 }, 00:17:42.186 { 00:17:42.186 "name": "BaseBdev2", 00:17:42.186 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:42.186 "is_configured": true, 00:17:42.186 "data_offset": 256, 00:17:42.186 "data_size": 7936 00:17:42.186 } 00:17:42.186 ] 00:17:42.186 }' 00:17:42.186 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.451 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.451 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.451 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.451 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.451 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.451 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.451 [2024-12-12 19:46:25.125081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.451 [2024-12-12 19:46:25.138894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:42.451 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.451 19:46:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:42.451 [2024-12-12 19:46:25.140760] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.404 19:46:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.404 "name": "raid_bdev1", 00:17:43.404 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:43.404 "strip_size_kb": 0, 00:17:43.404 "state": "online", 00:17:43.404 "raid_level": "raid1", 00:17:43.404 "superblock": true, 00:17:43.404 "num_base_bdevs": 2, 00:17:43.404 "num_base_bdevs_discovered": 2, 00:17:43.404 "num_base_bdevs_operational": 2, 00:17:43.404 "process": { 00:17:43.404 "type": "rebuild", 00:17:43.404 "target": "spare", 00:17:43.404 "progress": { 00:17:43.404 "blocks": 2560, 00:17:43.404 "percent": 32 00:17:43.404 } 00:17:43.404 }, 00:17:43.404 "base_bdevs_list": [ 00:17:43.404 { 00:17:43.404 "name": "spare", 00:17:43.404 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:43.404 "is_configured": true, 00:17:43.404 "data_offset": 256, 00:17:43.404 "data_size": 7936 00:17:43.404 }, 
00:17:43.404 { 00:17:43.404 "name": "BaseBdev2", 00:17:43.404 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:43.404 "is_configured": true, 00:17:43.404 "data_offset": 256, 00:17:43.404 "data_size": 7936 00:17:43.404 } 00:17:43.404 ] 00:17:43.404 }' 00:17:43.404 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:43.664 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=702 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.664 19:46:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.664 "name": "raid_bdev1", 00:17:43.664 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:43.664 "strip_size_kb": 0, 00:17:43.664 "state": "online", 00:17:43.664 "raid_level": "raid1", 00:17:43.664 "superblock": true, 00:17:43.664 "num_base_bdevs": 2, 00:17:43.664 "num_base_bdevs_discovered": 2, 00:17:43.664 "num_base_bdevs_operational": 2, 00:17:43.664 "process": { 00:17:43.664 "type": "rebuild", 00:17:43.664 "target": "spare", 00:17:43.664 "progress": { 00:17:43.664 "blocks": 2816, 00:17:43.664 "percent": 35 00:17:43.664 } 00:17:43.664 }, 00:17:43.664 "base_bdevs_list": [ 00:17:43.664 { 00:17:43.664 "name": "spare", 00:17:43.664 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:43.664 "is_configured": true, 00:17:43.664 "data_offset": 256, 00:17:43.664 "data_size": 7936 00:17:43.664 }, 00:17:43.664 { 00:17:43.664 "name": "BaseBdev2", 00:17:43.664 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:43.664 
"is_configured": true, 00:17:43.664 "data_offset": 256, 00:17:43.664 "data_size": 7936 00:17:43.664 } 00:17:43.664 ] 00:17:43.664 }' 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.664 19:46:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.604 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.604 19:46:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.863 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.863 "name": "raid_bdev1", 00:17:44.863 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:44.864 "strip_size_kb": 0, 00:17:44.864 "state": "online", 00:17:44.864 "raid_level": "raid1", 00:17:44.864 "superblock": true, 00:17:44.864 "num_base_bdevs": 2, 00:17:44.864 "num_base_bdevs_discovered": 2, 00:17:44.864 "num_base_bdevs_operational": 2, 00:17:44.864 "process": { 00:17:44.864 "type": "rebuild", 00:17:44.864 "target": "spare", 00:17:44.864 "progress": { 00:17:44.864 "blocks": 5632, 00:17:44.864 "percent": 70 00:17:44.864 } 00:17:44.864 }, 00:17:44.864 "base_bdevs_list": [ 00:17:44.864 { 00:17:44.864 "name": "spare", 00:17:44.864 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:44.864 "is_configured": true, 00:17:44.864 "data_offset": 256, 00:17:44.864 "data_size": 7936 00:17:44.864 }, 00:17:44.864 { 00:17:44.864 "name": "BaseBdev2", 00:17:44.864 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:44.864 "is_configured": true, 00:17:44.864 "data_offset": 256, 00:17:44.864 "data_size": 7936 00:17:44.864 } 00:17:44.864 ] 00:17:44.864 }' 00:17:44.864 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.864 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.864 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.864 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.864 19:46:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.433 [2024-12-12 19:46:28.254876] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:17:45.433 [2024-12-12 19:46:28.254958] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:45.433 [2024-12-12 19:46:28.255064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.002 "name": "raid_bdev1", 00:17:46.002 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:46.002 "strip_size_kb": 0, 00:17:46.002 "state": "online", 00:17:46.002 "raid_level": "raid1", 00:17:46.002 "superblock": true, 00:17:46.002 
"num_base_bdevs": 2, 00:17:46.002 "num_base_bdevs_discovered": 2, 00:17:46.002 "num_base_bdevs_operational": 2, 00:17:46.002 "base_bdevs_list": [ 00:17:46.002 { 00:17:46.002 "name": "spare", 00:17:46.002 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:46.002 "is_configured": true, 00:17:46.002 "data_offset": 256, 00:17:46.002 "data_size": 7936 00:17:46.002 }, 00:17:46.002 { 00:17:46.002 "name": "BaseBdev2", 00:17:46.002 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:46.002 "is_configured": true, 00:17:46.002 "data_offset": 256, 00:17:46.002 "data_size": 7936 00:17:46.002 } 00:17:46.002 ] 00:17:46.002 }' 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.002 
19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.002 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.003 "name": "raid_bdev1", 00:17:46.003 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:46.003 "strip_size_kb": 0, 00:17:46.003 "state": "online", 00:17:46.003 "raid_level": "raid1", 00:17:46.003 "superblock": true, 00:17:46.003 "num_base_bdevs": 2, 00:17:46.003 "num_base_bdevs_discovered": 2, 00:17:46.003 "num_base_bdevs_operational": 2, 00:17:46.003 "base_bdevs_list": [ 00:17:46.003 { 00:17:46.003 "name": "spare", 00:17:46.003 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:46.003 "is_configured": true, 00:17:46.003 "data_offset": 256, 00:17:46.003 "data_size": 7936 00:17:46.003 }, 00:17:46.003 { 00:17:46.003 "name": "BaseBdev2", 00:17:46.003 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:46.003 "is_configured": true, 00:17:46.003 "data_offset": 256, 00:17:46.003 "data_size": 7936 00:17:46.003 } 00:17:46.003 ] 00:17:46.003 }' 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.003 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.263 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.263 "name": "raid_bdev1", 00:17:46.263 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:46.263 
"strip_size_kb": 0, 00:17:46.263 "state": "online", 00:17:46.263 "raid_level": "raid1", 00:17:46.263 "superblock": true, 00:17:46.263 "num_base_bdevs": 2, 00:17:46.263 "num_base_bdevs_discovered": 2, 00:17:46.263 "num_base_bdevs_operational": 2, 00:17:46.263 "base_bdevs_list": [ 00:17:46.263 { 00:17:46.263 "name": "spare", 00:17:46.263 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:46.263 "is_configured": true, 00:17:46.263 "data_offset": 256, 00:17:46.263 "data_size": 7936 00:17:46.263 }, 00:17:46.263 { 00:17:46.263 "name": "BaseBdev2", 00:17:46.263 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:46.263 "is_configured": true, 00:17:46.263 "data_offset": 256, 00:17:46.263 "data_size": 7936 00:17:46.263 } 00:17:46.263 ] 00:17:46.263 }' 00:17:46.263 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.263 19:46:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.523 [2024-12-12 19:46:29.277055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.523 [2024-12-12 19:46:29.277147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.523 [2024-12-12 19:46:29.277252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.523 [2024-12-12 19:46:29.277366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.523 [2024-12-12 19:46:29.277421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:46.523 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:46.783 /dev/nbd0 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.783 1+0 records in 00:17:46.783 1+0 records out 00:17:46.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540976 s, 7.6 MB/s 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:46.783 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:47.043 /dev/nbd1 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.043 1+0 records in 00:17:47.043 1+0 records out 00:17:47.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451551 s, 9.1 MB/s 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:47.043 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:47.303 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:47.303 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.303 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:47.303 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.303 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:47.303 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.303 19:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.562 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.821 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.821 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.821 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.821 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.821 [2024-12-12 19:46:30.419557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.822 [2024-12-12 19:46:30.419617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.822 [2024-12-12 19:46:30.419638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:47.822 [2024-12-12 19:46:30.419647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:47.822 [2024-12-12 19:46:30.421471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.822 [2024-12-12 19:46:30.421509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.822 [2024-12-12 19:46:30.421577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:47.822 [2024-12-12 19:46:30.421640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.822 [2024-12-12 19:46:30.421773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.822 spare 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.822 [2024-12-12 19:46:30.521656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:47.822 [2024-12-12 19:46:30.521722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:47.822 [2024-12-12 19:46:30.521814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:47.822 [2024-12-12 19:46:30.521975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:47.822 [2024-12-12 19:46:30.521986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:47.822 [2024-12-12 19:46:30.522100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.822 "name": "raid_bdev1", 00:17:47.822 "uuid": 
"caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:47.822 "strip_size_kb": 0, 00:17:47.822 "state": "online", 00:17:47.822 "raid_level": "raid1", 00:17:47.822 "superblock": true, 00:17:47.822 "num_base_bdevs": 2, 00:17:47.822 "num_base_bdevs_discovered": 2, 00:17:47.822 "num_base_bdevs_operational": 2, 00:17:47.822 "base_bdevs_list": [ 00:17:47.822 { 00:17:47.822 "name": "spare", 00:17:47.822 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:47.822 "is_configured": true, 00:17:47.822 "data_offset": 256, 00:17:47.822 "data_size": 7936 00:17:47.822 }, 00:17:47.822 { 00:17:47.822 "name": "BaseBdev2", 00:17:47.822 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:47.822 "is_configured": true, 00:17:47.822 "data_offset": 256, 00:17:47.822 "data_size": 7936 00:17:47.822 } 00:17:47.822 ] 00:17:47.822 }' 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.822 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.391 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.391 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.391 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.391 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.391 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.391 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.391 19:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.391 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.391 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.391 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.391 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.391 "name": "raid_bdev1", 00:17:48.391 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:48.391 "strip_size_kb": 0, 00:17:48.391 "state": "online", 00:17:48.391 "raid_level": "raid1", 00:17:48.391 "superblock": true, 00:17:48.391 "num_base_bdevs": 2, 00:17:48.391 "num_base_bdevs_discovered": 2, 00:17:48.391 "num_base_bdevs_operational": 2, 00:17:48.391 "base_bdevs_list": [ 00:17:48.391 { 00:17:48.391 "name": "spare", 00:17:48.391 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:48.391 "is_configured": true, 00:17:48.391 "data_offset": 256, 00:17:48.391 "data_size": 7936 00:17:48.391 }, 00:17:48.391 { 00:17:48.391 "name": "BaseBdev2", 00:17:48.391 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:48.391 "is_configured": true, 00:17:48.392 "data_offset": 256, 00:17:48.392 "data_size": 7936 00:17:48.392 } 00:17:48.392 ] 00:17:48.392 }' 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.392 [2024-12-12 19:46:31.198435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.392 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.651 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.651 "name": "raid_bdev1", 00:17:48.651 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:48.651 "strip_size_kb": 0, 00:17:48.651 "state": "online", 00:17:48.651 "raid_level": "raid1", 00:17:48.651 "superblock": true, 00:17:48.651 "num_base_bdevs": 2, 00:17:48.651 "num_base_bdevs_discovered": 1, 00:17:48.651 "num_base_bdevs_operational": 1, 00:17:48.651 "base_bdevs_list": [ 00:17:48.651 { 00:17:48.651 "name": null, 00:17:48.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.651 "is_configured": false, 00:17:48.651 "data_offset": 0, 00:17:48.652 "data_size": 7936 00:17:48.652 }, 00:17:48.652 { 00:17:48.652 "name": "BaseBdev2", 00:17:48.652 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:48.652 "is_configured": true, 00:17:48.652 "data_offset": 256, 00:17:48.652 "data_size": 7936 00:17:48.652 } 00:17:48.652 ] 00:17:48.652 }' 00:17:48.652 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.652 19:46:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.911 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:48.911 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.911 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.911 [2024-12-12 19:46:31.666237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:48.911 [2024-12-12 19:46:31.666529] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.911 [2024-12-12 19:46:31.666618] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:48.911 [2024-12-12 19:46:31.666710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:48.911 [2024-12-12 19:46:31.680064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:48.911 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.911 19:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:48.911 [2024-12-12 19:46:31.681970] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:49.850 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.850 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.850 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.850 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:49.850 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.850 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.850 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.850 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.110 "name": "raid_bdev1", 00:17:50.110 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2", 00:17:50.110 "strip_size_kb": 0, 00:17:50.110 "state": "online", 00:17:50.110 "raid_level": "raid1", 00:17:50.110 "superblock": true, 00:17:50.110 "num_base_bdevs": 2, 00:17:50.110 "num_base_bdevs_discovered": 2, 00:17:50.110 "num_base_bdevs_operational": 2, 00:17:50.110 "process": { 00:17:50.110 "type": "rebuild", 00:17:50.110 "target": "spare", 00:17:50.110 "progress": { 00:17:50.110 "blocks": 2560, 00:17:50.110 "percent": 32 00:17:50.110 } 00:17:50.110 }, 00:17:50.110 "base_bdevs_list": [ 00:17:50.110 { 00:17:50.110 "name": "spare", 00:17:50.110 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067", 00:17:50.110 "is_configured": true, 00:17:50.110 "data_offset": 256, 00:17:50.110 "data_size": 7936 00:17:50.110 }, 00:17:50.110 { 00:17:50.110 "name": "BaseBdev2", 00:17:50.110 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03", 00:17:50.110 "is_configured": true, 00:17:50.110 "data_offset": 256, 00:17:50.110 "data_size": 7936 00:17:50.110 } 00:17:50.110 ] 00:17:50.110 }' 00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.110 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:50.110 [2024-12-12 19:46:32.850653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:50.110 [2024-12-12 19:46:32.887693] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:50.110 [2024-12-12 19:46:32.887777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:50.110 [2024-12-12 19:46:32.887791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:50.111 [2024-12-12 19:46:32.887810] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:50.111 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.370 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:50.370 "name": "raid_bdev1",
00:17:50.370 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2",
00:17:50.370 "strip_size_kb": 0,
00:17:50.370 "state": "online",
00:17:50.370 "raid_level": "raid1",
00:17:50.370 "superblock": true,
00:17:50.370 "num_base_bdevs": 2,
00:17:50.370 "num_base_bdevs_discovered": 1,
00:17:50.370 "num_base_bdevs_operational": 1,
00:17:50.370 "base_bdevs_list": [
00:17:50.370 {
00:17:50.370 "name": null,
00:17:50.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:50.370 "is_configured": false,
00:17:50.370 "data_offset": 0,
00:17:50.370 "data_size": 7936
00:17:50.370 },
00:17:50.370 {
00:17:50.370 "name": "BaseBdev2",
00:17:50.370 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03",
00:17:50.370 "is_configured": true,
00:17:50.370 "data_offset": 256,
00:17:50.370 "data_size": 7936
00:17:50.370 }
00:17:50.370 ]
00:17:50.370 }'
00:17:50.370 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:50.370 19:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:50.630 19:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:50.630 19:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.630 19:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:50.630 [2024-12-12 19:46:33.347309] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:50.630 [2024-12-12 19:46:33.347452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:50.630 [2024-12-12 19:46:33.347498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:17:50.630 [2024-12-12 19:46:33.347528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:50.630 [2024-12-12 19:46:33.347858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:50.630 [2024-12-12 19:46:33.347917] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:50.630 [2024-12-12 19:46:33.348026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:50.630 [2024-12-12 19:46:33.348068] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:50.630 [2024-12-12 19:46:33.348115] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:50.630 [2024-12-12 19:46:33.348172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:50.630 [2024-12-12 19:46:33.361894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0
00:17:50.630 spare
00:17:50.630 19:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.630 [2024-12-12 19:46:33.363833] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:50.630 19:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.569 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:51.828 "name": "raid_bdev1",
00:17:51.828 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2",
00:17:51.828 "strip_size_kb": 0,
00:17:51.828 "state": "online",
00:17:51.828 "raid_level": "raid1",
00:17:51.828 "superblock": true,
00:17:51.828 "num_base_bdevs": 2,
00:17:51.828 "num_base_bdevs_discovered": 2,
00:17:51.828 "num_base_bdevs_operational": 2,
00:17:51.828 "process": {
00:17:51.828 "type": "rebuild",
00:17:51.828 "target": "spare",
00:17:51.828 "progress": {
00:17:51.828 "blocks": 2560,
00:17:51.828 "percent": 32
00:17:51.828 }
00:17:51.828 },
00:17:51.828 "base_bdevs_list": [
00:17:51.828 {
00:17:51.828 "name": "spare",
00:17:51.828 "uuid": "09554044-0dcf-583d-aac2-1bd42611e067",
00:17:51.828 "is_configured": true,
00:17:51.828 "data_offset": 256,
00:17:51.828 "data_size": 7936
00:17:51.828 },
00:17:51.828 {
00:17:51.828 "name": "BaseBdev2",
00:17:51.828 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03",
00:17:51.828 "is_configured": true,
00:17:51.828 "data_offset": 256,
00:17:51.828 "data_size": 7936
00:17:51.828 }
00:17:51.828 ]
00:17:51.828 }'
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.828 [2024-12-12 19:46:34.507629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:51.828 [2024-12-12 19:46:34.569061] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:51.828 [2024-12-12 19:46:34.569154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:51.828 [2024-12-12 19:46:34.569191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:51.828 [2024-12-12 19:46:34.569197] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.828 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.829 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:51.829 "name": "raid_bdev1",
00:17:51.829 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2",
00:17:51.829 "strip_size_kb": 0,
00:17:51.829 "state": "online",
00:17:51.829 "raid_level": "raid1",
00:17:51.829 "superblock": true,
00:17:51.829 "num_base_bdevs": 2,
00:17:51.829 "num_base_bdevs_discovered": 1,
00:17:51.829 "num_base_bdevs_operational": 1,
00:17:51.829 "base_bdevs_list": [
00:17:51.829 {
00:17:51.829 "name": null,
00:17:51.829 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:51.829 "is_configured": false,
00:17:51.829 "data_offset": 0,
00:17:51.829 "data_size": 7936
00:17:51.829 },
00:17:51.829 {
00:17:51.829 "name": "BaseBdev2",
00:17:51.829 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03",
00:17:51.829 "is_configured": true,
00:17:51.829 "data_offset": 256,
00:17:51.829 "data_size": 7936
00:17:51.829 }
00:17:51.829 ]
00:17:51.829 }'
00:17:51.829 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:51.829 19:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:52.404 "name": "raid_bdev1",
00:17:52.404 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2",
00:17:52.404 "strip_size_kb": 0,
00:17:52.404 "state": "online",
00:17:52.404 "raid_level": "raid1",
00:17:52.404 "superblock": true,
00:17:52.404 "num_base_bdevs": 2,
00:17:52.404 "num_base_bdevs_discovered": 1,
00:17:52.404 "num_base_bdevs_operational": 1,
00:17:52.404 "base_bdevs_list": [
00:17:52.404 {
00:17:52.404 "name": null,
00:17:52.404 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:52.404 "is_configured": false,
00:17:52.404 "data_offset": 0,
00:17:52.404 "data_size": 7936
00:17:52.404 },
00:17:52.404 {
00:17:52.404 "name": "BaseBdev2",
00:17:52.404 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03",
00:17:52.404 "is_configured": true,
00:17:52.404 "data_offset": 256,
00:17:52.404 "data_size": 7936
00:17:52.404 }
00:17:52.404 ]
00:17:52.404 }'
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:52.404 [2024-12-12 19:46:35.203397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:52.404 [2024-12-12 19:46:35.203450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:52.404 [2024-12-12 19:46:35.203482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:17:52.404 [2024-12-12 19:46:35.203491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:52.404 [2024-12-12 19:46:35.203709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:52.404 [2024-12-12 19:46:35.203725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:52.404 [2024-12-12 19:46:35.203774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:17:52.404 [2024-12-12 19:46:35.203786] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:52.404 [2024-12-12 19:46:35.203795] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:52.404 [2024-12-12 19:46:35.203804] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:17:52.404 BaseBdev1
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.404 19:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:53.787 "name": "raid_bdev1",
00:17:53.787 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2",
00:17:53.787 "strip_size_kb": 0,
00:17:53.787 "state": "online",
00:17:53.787 "raid_level": "raid1",
00:17:53.787 "superblock": true,
00:17:53.787 "num_base_bdevs": 2,
00:17:53.787 "num_base_bdevs_discovered": 1,
00:17:53.787 "num_base_bdevs_operational": 1,
00:17:53.787 "base_bdevs_list": [
00:17:53.787 {
00:17:53.787 "name": null,
00:17:53.787 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:53.787 "is_configured": false,
00:17:53.787 "data_offset": 0,
00:17:53.787 "data_size": 7936
00:17:53.787 },
00:17:53.787 {
00:17:53.787 "name": "BaseBdev2",
00:17:53.787 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03",
00:17:53.787 "is_configured": true,
00:17:53.787 "data_offset": 256,
00:17:53.787 "data_size": 7936
00:17:53.787 }
00:17:53.787 ]
00:17:53.787 }'
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:53.787 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.047 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:54.047 "name": "raid_bdev1",
00:17:54.047 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2",
00:17:54.047 "strip_size_kb": 0,
00:17:54.047 "state": "online",
00:17:54.047 "raid_level": "raid1",
00:17:54.047 "superblock": true,
00:17:54.047 "num_base_bdevs": 2,
00:17:54.047 "num_base_bdevs_discovered": 1,
00:17:54.047 "num_base_bdevs_operational": 1,
00:17:54.047 "base_bdevs_list": [
00:17:54.047 {
00:17:54.047 "name": null,
00:17:54.047 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:54.047 "is_configured": false,
00:17:54.047 "data_offset": 0,
00:17:54.048 "data_size": 7936
00:17:54.048 },
00:17:54.048 {
00:17:54.048 "name": "BaseBdev2",
00:17:54.048 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03",
00:17:54.048 "is_configured": true,
00:17:54.048 "data_offset": 256,
00:17:54.048 "data_size": 7936
00:17:54.048 }
00:17:54.048 ]
00:17:54.048 }'
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:54.048 [2024-12-12 19:46:36.756985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:54.048 [2024-12-12 19:46:36.757184] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:54.048 [2024-12-12 19:46:36.757204] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:54.048 request:
00:17:54.048 {
00:17:54.048 "base_bdev": "BaseBdev1",
00:17:54.048 "raid_bdev": "raid_bdev1",
00:17:54.048 "method": "bdev_raid_add_base_bdev",
00:17:54.048 "req_id": 1
00:17:54.048 }
00:17:54.048 Got JSON-RPC error response
00:17:54.048 response:
00:17:54.048 {
00:17:54.048 "code": -22,
00:17:54.048 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:17:54.048 }
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:54.048 19:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1
00:17:54.985 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:54.985 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:54.985 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:54.985 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:54.985 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:54.985 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:54.985 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:54.985 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:54.986 "name": "raid_bdev1",
00:17:54.986 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2",
00:17:54.986 "strip_size_kb": 0,
00:17:54.986 "state": "online",
00:17:54.986 "raid_level": "raid1",
00:17:54.986 "superblock": true,
00:17:54.986 "num_base_bdevs": 2,
00:17:54.986 "num_base_bdevs_discovered": 1,
00:17:54.986 "num_base_bdevs_operational": 1,
00:17:54.986 "base_bdevs_list": [
00:17:54.986 {
00:17:54.986 "name": null,
00:17:54.986 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:54.986 "is_configured": false,
00:17:54.986 "data_offset": 0,
00:17:54.986 "data_size": 7936
00:17:54.986 },
00:17:54.986 {
00:17:54.986 "name": "BaseBdev2",
00:17:54.986 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03",
00:17:54.986 "is_configured": true,
00:17:54.986 "data_offset": 256,
00:17:54.986 "data_size": 7936
00:17:54.986 }
00:17:54.986 ]
00:17:54.986 }'
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:54.986 19:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:55.554 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:55.554 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:55.555 "name": "raid_bdev1",
00:17:55.555 "uuid": "caca4b2b-a56b-490e-82c5-d38f4b0554a2",
00:17:55.555 "strip_size_kb": 0,
00:17:55.555 "state": "online",
00:17:55.555 "raid_level": "raid1",
00:17:55.555 "superblock": true,
00:17:55.555 "num_base_bdevs": 2,
00:17:55.555 "num_base_bdevs_discovered": 1,
00:17:55.555 "num_base_bdevs_operational": 1,
00:17:55.555 "base_bdevs_list": [
00:17:55.555 {
00:17:55.555 "name": null,
00:17:55.555 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:55.555 "is_configured": false,
00:17:55.555 "data_offset": 0,
00:17:55.555 "data_size": 7936
00:17:55.555 },
00:17:55.555 {
00:17:55.555 "name": "BaseBdev2",
00:17:55.555 "uuid": "51971008-921d-598d-a78b-8a7b3cd8ff03",
00:17:55.555 "is_configured": true,
00:17:55.555 "data_offset": 256,
00:17:55.555 "data_size": 7936
00:17:55.555 }
00:17:55.555 ]
00:17:55.555 }'
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 89405
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 89405 ']'
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 89405
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89405
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:55.555 killing process with pid 89405 Received shutdown signal, test time was about 60.000000 seconds
00:17:55.555
00:17:55.555 Latency(us)
00:17:55.555 [2024-12-12T19:46:38.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:55.555 [2024-12-12T19:46:38.400Z] ===================================================================================================================
00:17:55.555 [2024-12-12T19:46:38.400Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89405'
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 89405
00:17:55.555 [2024-12-12 19:46:38.366122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:55.555 [2024-12-12 19:46:38.366248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:55.555 [2024-12-12 19:46:38.366311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:55.555 19:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 89405
00:17:55.555 [2024-12-12 19:46:38.366325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:17:56.124 [2024-12-12 19:46:38.669875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:57.063 ************************************
00:17:57.063 END TEST raid_rebuild_test_sb_md_separate
00:17:57.063 ************************************
00:17:57.063 19:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0
00:17:57.063
00:17:57.063 real 0m19.508s
00:17:57.063 user 0m25.413s
00:17:57.063 sys 0m2.575s
00:17:57.063 19:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:57.063 19:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.063 19:46:39 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i'
00:17:57.063 19:46:39 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true
00:17:57.063 19:46:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:17:57.063 19:46:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:57.063 19:46:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:57.063 ************************************
00:17:57.063 START TEST raid_state_function_test_sb_md_interleaved
00:17:57.063 ************************************
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:17:57.063 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=90095
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90095'
00:17:57.064 Process raid pid: 90095
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 90095
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90095 ']'
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:57.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:57.064 19:46:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:57.064 [2024-12-12 19:46:39.876280] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:17:57.064 [2024-12-12 19:46:39.876401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.323 [2024-12-12 19:46:40.056448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.323 [2024-12-12 19:46:40.163001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.583 [2024-12-12 19:46:40.343390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.583 [2024-12-12 19:46:40.343424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.153 [2024-12-12 19:46:40.709490] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:58.153 [2024-12-12 19:46:40.709603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:58.153 [2024-12-12 19:46:40.709635] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:58.153 [2024-12-12 19:46:40.709658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.153 19:46:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.153 19:46:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.153 "name": "Existed_Raid", 00:17:58.153 "uuid": "8c77a35e-9750-4b38-90c9-7a3921f2bcf5", 00:17:58.153 "strip_size_kb": 0, 00:17:58.153 "state": "configuring", 00:17:58.153 "raid_level": "raid1", 00:17:58.153 "superblock": true, 00:17:58.153 "num_base_bdevs": 2, 00:17:58.153 "num_base_bdevs_discovered": 0, 00:17:58.153 "num_base_bdevs_operational": 2, 00:17:58.153 "base_bdevs_list": [ 00:17:58.153 { 00:17:58.153 "name": "BaseBdev1", 00:17:58.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.153 "is_configured": false, 00:17:58.153 "data_offset": 0, 00:17:58.153 "data_size": 0 00:17:58.153 }, 00:17:58.153 { 00:17:58.153 "name": "BaseBdev2", 00:17:58.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.153 "is_configured": false, 00:17:58.153 "data_offset": 0, 00:17:58.153 "data_size": 0 00:17:58.153 } 00:17:58.153 ] 00:17:58.153 }' 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.153 19:46:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.413 [2024-12-12 19:46:41.184603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:58.413 [2024-12-12 19:46:41.184669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.413 [2024-12-12 19:46:41.196588] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:58.413 [2024-12-12 19:46:41.196633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:58.413 [2024-12-12 19:46:41.196642] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:58.413 [2024-12-12 19:46:41.196653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.413 [2024-12-12 19:46:41.241731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.413 BaseBdev1 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.413 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.672 [ 00:17:58.672 { 00:17:58.672 "name": "BaseBdev1", 00:17:58.672 "aliases": [ 00:17:58.672 "3bcb0fbb-875a-4076-9cf0-c9be9617724b" 00:17:58.672 ], 00:17:58.672 "product_name": "Malloc disk", 00:17:58.672 "block_size": 4128, 00:17:58.672 "num_blocks": 8192, 00:17:58.672 "uuid": "3bcb0fbb-875a-4076-9cf0-c9be9617724b", 00:17:58.672 "md_size": 32, 00:17:58.672 
"md_interleave": true, 00:17:58.672 "dif_type": 0, 00:17:58.672 "assigned_rate_limits": { 00:17:58.672 "rw_ios_per_sec": 0, 00:17:58.672 "rw_mbytes_per_sec": 0, 00:17:58.672 "r_mbytes_per_sec": 0, 00:17:58.672 "w_mbytes_per_sec": 0 00:17:58.672 }, 00:17:58.672 "claimed": true, 00:17:58.672 "claim_type": "exclusive_write", 00:17:58.672 "zoned": false, 00:17:58.672 "supported_io_types": { 00:17:58.672 "read": true, 00:17:58.672 "write": true, 00:17:58.672 "unmap": true, 00:17:58.672 "flush": true, 00:17:58.672 "reset": true, 00:17:58.672 "nvme_admin": false, 00:17:58.672 "nvme_io": false, 00:17:58.672 "nvme_io_md": false, 00:17:58.672 "write_zeroes": true, 00:17:58.672 "zcopy": true, 00:17:58.672 "get_zone_info": false, 00:17:58.672 "zone_management": false, 00:17:58.672 "zone_append": false, 00:17:58.672 "compare": false, 00:17:58.672 "compare_and_write": false, 00:17:58.672 "abort": true, 00:17:58.672 "seek_hole": false, 00:17:58.672 "seek_data": false, 00:17:58.672 "copy": true, 00:17:58.672 "nvme_iov_md": false 00:17:58.672 }, 00:17:58.672 "memory_domains": [ 00:17:58.672 { 00:17:58.672 "dma_device_id": "system", 00:17:58.672 "dma_device_type": 1 00:17:58.672 }, 00:17:58.672 { 00:17:58.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.672 "dma_device_type": 2 00:17:58.672 } 00:17:58.672 ], 00:17:58.672 "driver_specific": {} 00:17:58.672 } 00:17:58.672 ] 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.672 19:46:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.672 "name": "Existed_Raid", 00:17:58.672 "uuid": "ee30e61d-068b-4977-9046-974311df1a04", 00:17:58.672 "strip_size_kb": 0, 00:17:58.672 "state": "configuring", 00:17:58.672 "raid_level": "raid1", 
00:17:58.672 "superblock": true, 00:17:58.672 "num_base_bdevs": 2, 00:17:58.672 "num_base_bdevs_discovered": 1, 00:17:58.672 "num_base_bdevs_operational": 2, 00:17:58.672 "base_bdevs_list": [ 00:17:58.672 { 00:17:58.672 "name": "BaseBdev1", 00:17:58.672 "uuid": "3bcb0fbb-875a-4076-9cf0-c9be9617724b", 00:17:58.672 "is_configured": true, 00:17:58.672 "data_offset": 256, 00:17:58.672 "data_size": 7936 00:17:58.672 }, 00:17:58.672 { 00:17:58.672 "name": "BaseBdev2", 00:17:58.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.672 "is_configured": false, 00:17:58.672 "data_offset": 0, 00:17:58.672 "data_size": 0 00:17:58.672 } 00:17:58.672 ] 00:17:58.672 }' 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.672 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.932 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:58.932 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.932 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.932 [2024-12-12 19:46:41.764876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:58.932 [2024-12-12 19:46:41.764951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:58.932 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.932 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:58.932 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:58.932 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.191 [2024-12-12 19:46:41.776910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.191 [2024-12-12 19:46:41.778825] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.191 [2024-12-12 19:46:41.778865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.191 
19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.191 "name": "Existed_Raid", 00:17:59.191 "uuid": "d6c3df0f-ea41-46f1-be7e-26d36b7e6421", 00:17:59.191 "strip_size_kb": 0, 00:17:59.191 "state": "configuring", 00:17:59.191 "raid_level": "raid1", 00:17:59.191 "superblock": true, 00:17:59.191 "num_base_bdevs": 2, 00:17:59.191 "num_base_bdevs_discovered": 1, 00:17:59.191 "num_base_bdevs_operational": 2, 00:17:59.191 "base_bdevs_list": [ 00:17:59.191 { 00:17:59.191 "name": "BaseBdev1", 00:17:59.191 "uuid": "3bcb0fbb-875a-4076-9cf0-c9be9617724b", 00:17:59.191 "is_configured": true, 00:17:59.191 "data_offset": 256, 00:17:59.191 "data_size": 7936 00:17:59.191 }, 00:17:59.191 { 00:17:59.191 "name": "BaseBdev2", 00:17:59.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.191 "is_configured": false, 00:17:59.191 "data_offset": 0, 00:17:59.191 "data_size": 0 00:17:59.191 } 00:17:59.191 ] 00:17:59.191 }' 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:59.191 19:46:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.451 [2024-12-12 19:46:42.265470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:59.451 [2024-12-12 19:46:42.265824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:59.451 [2024-12-12 19:46:42.265877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:59.451 [2024-12-12 19:46:42.266005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:59.451 [2024-12-12 19:46:42.266123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:59.451 [2024-12-12 19:46:42.266163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:59.451 [2024-12-12 19:46:42.266302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.451 BaseBdev2 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.451 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.451 [ 00:17:59.451 { 00:17:59.451 "name": "BaseBdev2", 00:17:59.451 "aliases": [ 00:17:59.711 "d0d3b7b0-e1ca-40be-9d9a-44701f9837e6" 00:17:59.711 ], 00:17:59.712 "product_name": "Malloc disk", 00:17:59.712 "block_size": 4128, 00:17:59.712 "num_blocks": 8192, 00:17:59.712 "uuid": "d0d3b7b0-e1ca-40be-9d9a-44701f9837e6", 00:17:59.712 "md_size": 32, 00:17:59.712 "md_interleave": true, 00:17:59.712 "dif_type": 0, 00:17:59.712 "assigned_rate_limits": { 00:17:59.712 "rw_ios_per_sec": 0, 00:17:59.712 "rw_mbytes_per_sec": 0, 00:17:59.712 "r_mbytes_per_sec": 0, 00:17:59.712 "w_mbytes_per_sec": 0 00:17:59.712 }, 00:17:59.712 "claimed": true, 00:17:59.712 "claim_type": "exclusive_write", 
00:17:59.712 "zoned": false, 00:17:59.712 "supported_io_types": { 00:17:59.712 "read": true, 00:17:59.712 "write": true, 00:17:59.712 "unmap": true, 00:17:59.712 "flush": true, 00:17:59.712 "reset": true, 00:17:59.712 "nvme_admin": false, 00:17:59.712 "nvme_io": false, 00:17:59.712 "nvme_io_md": false, 00:17:59.712 "write_zeroes": true, 00:17:59.712 "zcopy": true, 00:17:59.712 "get_zone_info": false, 00:17:59.712 "zone_management": false, 00:17:59.712 "zone_append": false, 00:17:59.712 "compare": false, 00:17:59.712 "compare_and_write": false, 00:17:59.712 "abort": true, 00:17:59.712 "seek_hole": false, 00:17:59.712 "seek_data": false, 00:17:59.712 "copy": true, 00:17:59.712 "nvme_iov_md": false 00:17:59.712 }, 00:17:59.712 "memory_domains": [ 00:17:59.712 { 00:17:59.712 "dma_device_id": "system", 00:17:59.712 "dma_device_type": 1 00:17:59.712 }, 00:17:59.712 { 00:17:59.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.712 "dma_device_type": 2 00:17:59.712 } 00:17:59.712 ], 00:17:59.712 "driver_specific": {} 00:17:59.712 } 00:17:59.712 ] 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.712 
19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.712 "name": "Existed_Raid", 00:17:59.712 "uuid": "d6c3df0f-ea41-46f1-be7e-26d36b7e6421", 00:17:59.712 "strip_size_kb": 0, 00:17:59.712 "state": "online", 00:17:59.712 "raid_level": "raid1", 00:17:59.712 "superblock": true, 00:17:59.712 "num_base_bdevs": 2, 00:17:59.712 "num_base_bdevs_discovered": 2, 00:17:59.712 
"num_base_bdevs_operational": 2, 00:17:59.712 "base_bdevs_list": [ 00:17:59.712 { 00:17:59.712 "name": "BaseBdev1", 00:17:59.712 "uuid": "3bcb0fbb-875a-4076-9cf0-c9be9617724b", 00:17:59.712 "is_configured": true, 00:17:59.712 "data_offset": 256, 00:17:59.712 "data_size": 7936 00:17:59.712 }, 00:17:59.712 { 00:17:59.712 "name": "BaseBdev2", 00:17:59.712 "uuid": "d0d3b7b0-e1ca-40be-9d9a-44701f9837e6", 00:17:59.712 "is_configured": true, 00:17:59.712 "data_offset": 256, 00:17:59.712 "data_size": 7936 00:17:59.712 } 00:17:59.712 ] 00:17:59.712 }' 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.712 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.003 19:46:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:00.003 [2024-12-12 19:46:42.748976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:00.003 "name": "Existed_Raid", 00:18:00.003 "aliases": [ 00:18:00.003 "d6c3df0f-ea41-46f1-be7e-26d36b7e6421" 00:18:00.003 ], 00:18:00.003 "product_name": "Raid Volume", 00:18:00.003 "block_size": 4128, 00:18:00.003 "num_blocks": 7936, 00:18:00.003 "uuid": "d6c3df0f-ea41-46f1-be7e-26d36b7e6421", 00:18:00.003 "md_size": 32, 00:18:00.003 "md_interleave": true, 00:18:00.003 "dif_type": 0, 00:18:00.003 "assigned_rate_limits": { 00:18:00.003 "rw_ios_per_sec": 0, 00:18:00.003 "rw_mbytes_per_sec": 0, 00:18:00.003 "r_mbytes_per_sec": 0, 00:18:00.003 "w_mbytes_per_sec": 0 00:18:00.003 }, 00:18:00.003 "claimed": false, 00:18:00.003 "zoned": false, 00:18:00.003 "supported_io_types": { 00:18:00.003 "read": true, 00:18:00.003 "write": true, 00:18:00.003 "unmap": false, 00:18:00.003 "flush": false, 00:18:00.003 "reset": true, 00:18:00.003 "nvme_admin": false, 00:18:00.003 "nvme_io": false, 00:18:00.003 "nvme_io_md": false, 00:18:00.003 "write_zeroes": true, 00:18:00.003 "zcopy": false, 00:18:00.003 "get_zone_info": false, 00:18:00.003 "zone_management": false, 00:18:00.003 "zone_append": false, 00:18:00.003 "compare": false, 00:18:00.003 "compare_and_write": false, 00:18:00.003 "abort": false, 00:18:00.003 "seek_hole": false, 00:18:00.003 "seek_data": false, 00:18:00.003 "copy": false, 00:18:00.003 "nvme_iov_md": false 00:18:00.003 }, 00:18:00.003 "memory_domains": [ 00:18:00.003 { 00:18:00.003 "dma_device_id": "system", 00:18:00.003 "dma_device_type": 1 00:18:00.003 }, 00:18:00.003 { 00:18:00.003 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:00.003 "dma_device_type": 2 00:18:00.003 }, 00:18:00.003 { 00:18:00.003 "dma_device_id": "system", 00:18:00.003 "dma_device_type": 1 00:18:00.003 }, 00:18:00.003 { 00:18:00.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.003 "dma_device_type": 2 00:18:00.003 } 00:18:00.003 ], 00:18:00.003 "driver_specific": { 00:18:00.003 "raid": { 00:18:00.003 "uuid": "d6c3df0f-ea41-46f1-be7e-26d36b7e6421", 00:18:00.003 "strip_size_kb": 0, 00:18:00.003 "state": "online", 00:18:00.003 "raid_level": "raid1", 00:18:00.003 "superblock": true, 00:18:00.003 "num_base_bdevs": 2, 00:18:00.003 "num_base_bdevs_discovered": 2, 00:18:00.003 "num_base_bdevs_operational": 2, 00:18:00.003 "base_bdevs_list": [ 00:18:00.003 { 00:18:00.003 "name": "BaseBdev1", 00:18:00.003 "uuid": "3bcb0fbb-875a-4076-9cf0-c9be9617724b", 00:18:00.003 "is_configured": true, 00:18:00.003 "data_offset": 256, 00:18:00.003 "data_size": 7936 00:18:00.003 }, 00:18:00.003 { 00:18:00.003 "name": "BaseBdev2", 00:18:00.003 "uuid": "d0d3b7b0-e1ca-40be-9d9a-44701f9837e6", 00:18:00.003 "is_configured": true, 00:18:00.003 "data_offset": 256, 00:18:00.003 "data_size": 7936 00:18:00.003 } 00:18:00.003 ] 00:18:00.003 } 00:18:00.003 } 00:18:00.003 }' 00:18:00.003 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:00.276 BaseBdev2' 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:00.276 
19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.276 19:46:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.276 [2024-12-12 19:46:42.976353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.276 19:46:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.276 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.276 "name": "Existed_Raid", 00:18:00.276 "uuid": "d6c3df0f-ea41-46f1-be7e-26d36b7e6421", 00:18:00.276 "strip_size_kb": 0, 00:18:00.276 "state": "online", 00:18:00.276 "raid_level": "raid1", 00:18:00.276 "superblock": true, 00:18:00.276 "num_base_bdevs": 2, 00:18:00.276 "num_base_bdevs_discovered": 1, 00:18:00.276 "num_base_bdevs_operational": 1, 00:18:00.276 "base_bdevs_list": [ 00:18:00.276 { 00:18:00.276 "name": null, 00:18:00.276 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:00.276 "is_configured": false, 00:18:00.276 "data_offset": 0, 00:18:00.276 "data_size": 7936 00:18:00.276 }, 00:18:00.276 { 00:18:00.276 "name": "BaseBdev2", 00:18:00.276 "uuid": "d0d3b7b0-e1ca-40be-9d9a-44701f9837e6", 00:18:00.277 "is_configured": true, 00:18:00.277 "data_offset": 256, 00:18:00.277 "data_size": 7936 00:18:00.277 } 00:18:00.277 ] 00:18:00.277 }' 00:18:00.277 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.277 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:00.845 19:46:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.845 [2024-12-12 19:46:43.546458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:00.845 [2024-12-12 19:46:43.546579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.845 [2024-12-12 19:46:43.635870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.845 [2024-12-12 19:46:43.635923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.845 [2024-12-12 19:46:43.635934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 90095 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90095 ']' 00:18:00.845 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90095 00:18:01.105 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:01.105 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.105 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90095 00:18:01.105 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.105 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.105 killing process with pid 90095 00:18:01.105 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90095' 00:18:01.105 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 90095 00:18:01.105 [2024-12-12 19:46:43.732120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.105 19:46:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 90095 00:18:01.105 [2024-12-12 19:46:43.747831] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:02.046 
19:46:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:02.046 00:18:02.046 real 0m5.025s 00:18:02.046 user 0m7.260s 00:18:02.046 sys 0m0.894s 00:18:02.046 19:46:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.046 19:46:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.046 ************************************ 00:18:02.046 END TEST raid_state_function_test_sb_md_interleaved 00:18:02.046 ************************************ 00:18:02.046 19:46:44 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:02.046 19:46:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:02.046 19:46:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.046 19:46:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.046 ************************************ 00:18:02.046 START TEST raid_superblock_test_md_interleaved 00:18:02.046 ************************************ 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:02.046 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:02.306 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=90339 00:18:02.306 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:02.306 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 90339 00:18:02.306 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90339 ']' 00:18:02.306 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.306 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.306 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.306 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.306 19:46:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.306 [2024-12-12 19:46:44.978078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:02.306 [2024-12-12 19:46:44.978196] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90339 ] 00:18:02.566 [2024-12-12 19:46:45.150620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.566 [2024-12-12 19:46:45.257551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.825 [2024-12-12 19:46:45.451468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.825 [2024-12-12 19:46:45.451499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.085 malloc1 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.085 [2024-12-12 19:46:45.834913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.085 [2024-12-12 19:46:45.835008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.085 [2024-12-12 19:46:45.835044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:03.085 [2024-12-12 19:46:45.835071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.085 
[2024-12-12 19:46:45.836877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.085 [2024-12-12 19:46:45.836946] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.085 pt1 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:03.085 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.086 malloc2 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.086 [2024-12-12 19:46:45.892448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.086 [2024-12-12 19:46:45.892500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.086 [2024-12-12 19:46:45.892520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:03.086 [2024-12-12 19:46:45.892528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.086 [2024-12-12 19:46:45.894271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.086 [2024-12-12 19:46:45.894315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.086 pt2 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.086 [2024-12-12 19:46:45.904458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.086 [2024-12-12 19:46:45.906159] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.086 [2024-12-12 19:46:45.906438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:03.086 [2024-12-12 19:46:45.906458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:03.086 [2024-12-12 19:46:45.906528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:03.086 [2024-12-12 19:46:45.906614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:03.086 [2024-12-12 19:46:45.906627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:03.086 [2024-12-12 19:46:45.906691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.086 
19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.086 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.345 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.345 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.345 "name": "raid_bdev1", 00:18:03.345 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:03.345 "strip_size_kb": 0, 00:18:03.345 "state": "online", 00:18:03.345 "raid_level": "raid1", 00:18:03.345 "superblock": true, 00:18:03.345 "num_base_bdevs": 2, 00:18:03.345 "num_base_bdevs_discovered": 2, 00:18:03.345 "num_base_bdevs_operational": 2, 00:18:03.345 "base_bdevs_list": [ 00:18:03.345 { 00:18:03.345 "name": "pt1", 00:18:03.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.345 "is_configured": true, 00:18:03.345 "data_offset": 256, 00:18:03.345 "data_size": 7936 00:18:03.345 }, 00:18:03.345 { 00:18:03.345 "name": "pt2", 00:18:03.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.345 "is_configured": true, 00:18:03.345 "data_offset": 256, 00:18:03.345 "data_size": 7936 00:18:03.345 } 00:18:03.345 ] 00:18:03.345 }' 00:18:03.345 19:46:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.345 19:46:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.602 [2024-12-12 19:46:46.375903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.602 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:03.602 "name": "raid_bdev1", 00:18:03.602 "aliases": [ 00:18:03.602 "bc958f33-96bb-4fc6-b6eb-c1cef199e837" 00:18:03.602 ], 00:18:03.602 "product_name": "Raid Volume", 00:18:03.602 "block_size": 4128, 00:18:03.602 "num_blocks": 7936, 00:18:03.602 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:03.603 "md_size": 32, 
00:18:03.603 "md_interleave": true, 00:18:03.603 "dif_type": 0, 00:18:03.603 "assigned_rate_limits": { 00:18:03.603 "rw_ios_per_sec": 0, 00:18:03.603 "rw_mbytes_per_sec": 0, 00:18:03.603 "r_mbytes_per_sec": 0, 00:18:03.603 "w_mbytes_per_sec": 0 00:18:03.603 }, 00:18:03.603 "claimed": false, 00:18:03.603 "zoned": false, 00:18:03.603 "supported_io_types": { 00:18:03.603 "read": true, 00:18:03.603 "write": true, 00:18:03.603 "unmap": false, 00:18:03.603 "flush": false, 00:18:03.603 "reset": true, 00:18:03.603 "nvme_admin": false, 00:18:03.603 "nvme_io": false, 00:18:03.603 "nvme_io_md": false, 00:18:03.603 "write_zeroes": true, 00:18:03.603 "zcopy": false, 00:18:03.603 "get_zone_info": false, 00:18:03.603 "zone_management": false, 00:18:03.603 "zone_append": false, 00:18:03.603 "compare": false, 00:18:03.603 "compare_and_write": false, 00:18:03.603 "abort": false, 00:18:03.603 "seek_hole": false, 00:18:03.603 "seek_data": false, 00:18:03.603 "copy": false, 00:18:03.603 "nvme_iov_md": false 00:18:03.603 }, 00:18:03.603 "memory_domains": [ 00:18:03.603 { 00:18:03.603 "dma_device_id": "system", 00:18:03.603 "dma_device_type": 1 00:18:03.603 }, 00:18:03.603 { 00:18:03.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.603 "dma_device_type": 2 00:18:03.603 }, 00:18:03.603 { 00:18:03.603 "dma_device_id": "system", 00:18:03.603 "dma_device_type": 1 00:18:03.603 }, 00:18:03.603 { 00:18:03.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.603 "dma_device_type": 2 00:18:03.603 } 00:18:03.603 ], 00:18:03.603 "driver_specific": { 00:18:03.603 "raid": { 00:18:03.603 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:03.603 "strip_size_kb": 0, 00:18:03.603 "state": "online", 00:18:03.603 "raid_level": "raid1", 00:18:03.603 "superblock": true, 00:18:03.603 "num_base_bdevs": 2, 00:18:03.603 "num_base_bdevs_discovered": 2, 00:18:03.603 "num_base_bdevs_operational": 2, 00:18:03.603 "base_bdevs_list": [ 00:18:03.603 { 00:18:03.603 "name": "pt1", 00:18:03.603 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:03.603 "is_configured": true, 00:18:03.603 "data_offset": 256, 00:18:03.603 "data_size": 7936 00:18:03.603 }, 00:18:03.603 { 00:18:03.603 "name": "pt2", 00:18:03.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.603 "is_configured": true, 00:18:03.603 "data_offset": 256, 00:18:03.603 "data_size": 7936 00:18:03.603 } 00:18:03.603 ] 00:18:03.603 } 00:18:03.603 } 00:18:03.603 }' 00:18:03.603 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:03.861 pt2' 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:03.861 19:46:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:03.861 [2024-12-12 19:46:46.571581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc958f33-96bb-4fc6-b6eb-c1cef199e837 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z bc958f33-96bb-4fc6-b6eb-c1cef199e837 ']' 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.861 [2024-12-12 19:46:46.619238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.861 [2024-12-12 19:46:46.619260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.861 [2024-12-12 19:46:46.619327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.861 [2024-12-12 19:46:46.619373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.861 [2024-12-12 19:46:46.619384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.861 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.862 19:46:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.862 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.121 19:46:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.121 [2024-12-12 19:46:46.759015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:04.121 [2024-12-12 19:46:46.760896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:04.121 [2024-12-12 19:46:46.761018] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:04.121 [2024-12-12 19:46:46.761104] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:04.121 [2024-12-12 19:46:46.761153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.121 [2024-12-12 19:46:46.761203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:04.121 request: 00:18:04.121 { 00:18:04.121 "name": "raid_bdev1", 00:18:04.121 "raid_level": "raid1", 00:18:04.121 "base_bdevs": [ 00:18:04.121 "malloc1", 00:18:04.121 "malloc2" 00:18:04.121 ], 00:18:04.121 "superblock": false, 00:18:04.121 "method": "bdev_raid_create", 00:18:04.121 "req_id": 1 00:18:04.121 } 00:18:04.121 Got JSON-RPC error response 00:18:04.121 response: 00:18:04.121 { 00:18:04.121 "code": -17, 00:18:04.121 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:04.121 } 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.121 19:46:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.121 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.121 [2024-12-12 19:46:46.810904] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.121 [2024-12-12 19:46:46.811005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.121 [2024-12-12 19:46:46.811034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:04.121 [2024-12-12 19:46:46.811062] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.121 [2024-12-12 19:46:46.812917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.122 [2024-12-12 19:46:46.812998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.122 [2024-12-12 19:46:46.813058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:04.122 [2024-12-12 19:46:46.813144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.122 pt1 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.122 19:46:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.122 
"name": "raid_bdev1", 00:18:04.122 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:04.122 "strip_size_kb": 0, 00:18:04.122 "state": "configuring", 00:18:04.122 "raid_level": "raid1", 00:18:04.122 "superblock": true, 00:18:04.122 "num_base_bdevs": 2, 00:18:04.122 "num_base_bdevs_discovered": 1, 00:18:04.122 "num_base_bdevs_operational": 2, 00:18:04.122 "base_bdevs_list": [ 00:18:04.122 { 00:18:04.122 "name": "pt1", 00:18:04.122 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.122 "is_configured": true, 00:18:04.122 "data_offset": 256, 00:18:04.122 "data_size": 7936 00:18:04.122 }, 00:18:04.122 { 00:18:04.122 "name": null, 00:18:04.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.122 "is_configured": false, 00:18:04.122 "data_offset": 256, 00:18:04.122 "data_size": 7936 00:18:04.122 } 00:18:04.122 ] 00:18:04.122 }' 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.122 19:46:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.690 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:04.690 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:04.690 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:04.690 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.690 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.690 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.690 [2024-12-12 19:46:47.290386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.690 [2024-12-12 19:46:47.290438] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.690 [2024-12-12 19:46:47.290453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:04.690 [2024-12-12 19:46:47.290462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.690 [2024-12-12 19:46:47.290575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.690 [2024-12-12 19:46:47.290588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.690 [2024-12-12 19:46:47.290622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:04.690 [2024-12-12 19:46:47.290639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.690 [2024-12-12 19:46:47.290723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:04.690 [2024-12-12 19:46:47.290733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:04.691 [2024-12-12 19:46:47.290791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:04.691 [2024-12-12 19:46:47.290865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:04.691 [2024-12-12 19:46:47.290879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:04.691 [2024-12-12 19:46:47.290930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.691 pt2 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:04.691 19:46:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.691 "name": 
"raid_bdev1", 00:18:04.691 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:04.691 "strip_size_kb": 0, 00:18:04.691 "state": "online", 00:18:04.691 "raid_level": "raid1", 00:18:04.691 "superblock": true, 00:18:04.691 "num_base_bdevs": 2, 00:18:04.691 "num_base_bdevs_discovered": 2, 00:18:04.691 "num_base_bdevs_operational": 2, 00:18:04.691 "base_bdevs_list": [ 00:18:04.691 { 00:18:04.691 "name": "pt1", 00:18:04.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.691 "is_configured": true, 00:18:04.691 "data_offset": 256, 00:18:04.691 "data_size": 7936 00:18:04.691 }, 00:18:04.691 { 00:18:04.691 "name": "pt2", 00:18:04.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.691 "is_configured": true, 00:18:04.691 "data_offset": 256, 00:18:04.691 "data_size": 7936 00:18:04.691 } 00:18:04.691 ] 00:18:04.691 }' 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.691 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.950 19:46:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.950 [2024-12-12 19:46:47.758426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.950 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.210 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:05.210 "name": "raid_bdev1", 00:18:05.210 "aliases": [ 00:18:05.210 "bc958f33-96bb-4fc6-b6eb-c1cef199e837" 00:18:05.210 ], 00:18:05.210 "product_name": "Raid Volume", 00:18:05.210 "block_size": 4128, 00:18:05.210 "num_blocks": 7936, 00:18:05.210 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:05.210 "md_size": 32, 00:18:05.210 "md_interleave": true, 00:18:05.210 "dif_type": 0, 00:18:05.210 "assigned_rate_limits": { 00:18:05.210 "rw_ios_per_sec": 0, 00:18:05.210 "rw_mbytes_per_sec": 0, 00:18:05.210 "r_mbytes_per_sec": 0, 00:18:05.210 "w_mbytes_per_sec": 0 00:18:05.210 }, 00:18:05.210 "claimed": false, 00:18:05.210 "zoned": false, 00:18:05.210 "supported_io_types": { 00:18:05.210 "read": true, 00:18:05.210 "write": true, 00:18:05.210 "unmap": false, 00:18:05.210 "flush": false, 00:18:05.210 "reset": true, 00:18:05.210 "nvme_admin": false, 00:18:05.210 "nvme_io": false, 00:18:05.210 "nvme_io_md": false, 00:18:05.210 "write_zeroes": true, 00:18:05.210 "zcopy": false, 00:18:05.210 "get_zone_info": false, 00:18:05.210 "zone_management": false, 00:18:05.210 "zone_append": false, 00:18:05.210 "compare": false, 00:18:05.210 "compare_and_write": false, 00:18:05.210 "abort": false, 00:18:05.210 "seek_hole": false, 00:18:05.210 "seek_data": false, 00:18:05.210 "copy": false, 00:18:05.210 "nvme_iov_md": 
false 00:18:05.210 }, 00:18:05.210 "memory_domains": [ 00:18:05.210 { 00:18:05.210 "dma_device_id": "system", 00:18:05.210 "dma_device_type": 1 00:18:05.210 }, 00:18:05.210 { 00:18:05.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.210 "dma_device_type": 2 00:18:05.210 }, 00:18:05.210 { 00:18:05.210 "dma_device_id": "system", 00:18:05.210 "dma_device_type": 1 00:18:05.210 }, 00:18:05.210 { 00:18:05.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.211 "dma_device_type": 2 00:18:05.211 } 00:18:05.211 ], 00:18:05.211 "driver_specific": { 00:18:05.211 "raid": { 00:18:05.211 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:05.211 "strip_size_kb": 0, 00:18:05.211 "state": "online", 00:18:05.211 "raid_level": "raid1", 00:18:05.211 "superblock": true, 00:18:05.211 "num_base_bdevs": 2, 00:18:05.211 "num_base_bdevs_discovered": 2, 00:18:05.211 "num_base_bdevs_operational": 2, 00:18:05.211 "base_bdevs_list": [ 00:18:05.211 { 00:18:05.211 "name": "pt1", 00:18:05.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.211 "is_configured": true, 00:18:05.211 "data_offset": 256, 00:18:05.211 "data_size": 7936 00:18:05.211 }, 00:18:05.211 { 00:18:05.211 "name": "pt2", 00:18:05.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.211 "is_configured": true, 00:18:05.211 "data_offset": 256, 00:18:05.211 "data_size": 7936 00:18:05.211 } 00:18:05.211 ] 00:18:05.211 } 00:18:05.211 } 00:18:05.211 }' 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:05.211 pt2' 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:05.211 [2024-12-12 19:46:47.982021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.211 19:46:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' bc958f33-96bb-4fc6-b6eb-c1cef199e837 '!=' bc958f33-96bb-4fc6-b6eb-c1cef199e837 ']' 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.211 [2024-12-12 19:46:48.025744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.211 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.470 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:05.470 "name": "raid_bdev1", 00:18:05.470 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:05.470 "strip_size_kb": 0, 00:18:05.470 "state": "online", 00:18:05.470 "raid_level": "raid1", 00:18:05.470 "superblock": true, 00:18:05.470 "num_base_bdevs": 2, 00:18:05.470 "num_base_bdevs_discovered": 1, 00:18:05.470 "num_base_bdevs_operational": 1, 00:18:05.470 "base_bdevs_list": [ 00:18:05.470 { 00:18:05.470 "name": null, 00:18:05.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.470 "is_configured": false, 00:18:05.470 "data_offset": 0, 00:18:05.470 "data_size": 7936 00:18:05.470 }, 00:18:05.470 { 00:18:05.470 "name": "pt2", 00:18:05.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.470 "is_configured": true, 00:18:05.470 "data_offset": 256, 00:18:05.470 "data_size": 7936 00:18:05.470 } 00:18:05.470 ] 00:18:05.470 }' 00:18:05.470 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.470 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.729 [2024-12-12 19:46:48.421038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.729 [2024-12-12 19:46:48.421098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.729 [2024-12-12 19:46:48.421184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.729 [2024-12-12 19:46:48.421235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:05.729 [2024-12-12 19:46:48.421296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.729 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.730 [2024-12-12 19:46:48.496921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.730 [2024-12-12 19:46:48.496968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.730 [2024-12-12 19:46:48.496981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:05.730 [2024-12-12 19:46:48.496990] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.730 [2024-12-12 19:46:48.498717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.730 [2024-12-12 19:46:48.498797] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.730 [2024-12-12 19:46:48.498851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:05.730 [2024-12-12 19:46:48.498894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.730 [2024-12-12 19:46:48.498954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:05.730 [2024-12-12 19:46:48.498964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:05.730 [2024-12-12 19:46:48.499043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:05.730 [2024-12-12 19:46:48.499102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:05.730 [2024-12-12 19:46:48.499109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:05.730 [2024-12-12 19:46:48.499159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.730 pt2 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.730 19:46:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.730 "name": "raid_bdev1", 00:18:05.730 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:05.730 "strip_size_kb": 0, 00:18:05.730 "state": "online", 00:18:05.730 "raid_level": "raid1", 00:18:05.730 "superblock": true, 00:18:05.730 "num_base_bdevs": 2, 00:18:05.730 "num_base_bdevs_discovered": 1, 00:18:05.730 "num_base_bdevs_operational": 1, 00:18:05.730 "base_bdevs_list": [ 00:18:05.730 { 00:18:05.730 "name": null, 00:18:05.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.730 "is_configured": false, 00:18:05.730 "data_offset": 256, 00:18:05.730 "data_size": 7936 00:18:05.730 }, 00:18:05.730 { 00:18:05.730 "name": "pt2", 00:18:05.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.730 "is_configured": true, 00:18:05.730 "data_offset": 256, 00:18:05.730 "data_size": 7936 00:18:05.730 } 00:18:05.730 ] 00:18:05.730 }' 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.730 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.298 19:46:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.298 [2024-12-12 19:46:48.884242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.298 [2024-12-12 19:46:48.884307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.298 [2024-12-12 19:46:48.884389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.298 [2024-12-12 19:46:48.884441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.298 [2024-12-12 19:46:48.884499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:06.298 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.299 [2024-12-12 19:46:48.944170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.299 [2024-12-12 19:46:48.944215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.299 [2024-12-12 19:46:48.944232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:06.299 [2024-12-12 19:46:48.944240] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.299 [2024-12-12 19:46:48.946087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.299 [2024-12-12 19:46:48.946160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:06.299 [2024-12-12 19:46:48.946209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:06.299 [2024-12-12 19:46:48.946248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:06.299 [2024-12-12 19:46:48.946354] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:06.299 [2024-12-12 19:46:48.946364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.299 [2024-12-12 19:46:48.946390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:06.299 [2024-12-12 19:46:48.946443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.299 [2024-12-12 19:46:48.946502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:06.299 [2024-12-12 19:46:48.946510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:06.299 [2024-12-12 19:46:48.946589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:06.299 [2024-12-12 19:46:48.946646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:06.299 [2024-12-12 19:46:48.946654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:06.299 [2024-12-12 19:46:48.946712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.299 pt1 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.299 19:46:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.299 "name": "raid_bdev1", 00:18:06.299 "uuid": "bc958f33-96bb-4fc6-b6eb-c1cef199e837", 00:18:06.299 "strip_size_kb": 0, 00:18:06.299 "state": "online", 00:18:06.299 "raid_level": "raid1", 00:18:06.299 "superblock": true, 00:18:06.299 "num_base_bdevs": 2, 00:18:06.299 "num_base_bdevs_discovered": 1, 00:18:06.299 "num_base_bdevs_operational": 1, 00:18:06.299 "base_bdevs_list": [ 00:18:06.299 { 00:18:06.299 "name": null, 00:18:06.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.299 "is_configured": false, 00:18:06.299 "data_offset": 256, 00:18:06.299 "data_size": 7936 00:18:06.299 }, 00:18:06.299 { 00:18:06.299 "name": "pt2", 00:18:06.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.299 "is_configured": true, 00:18:06.299 "data_offset": 256, 00:18:06.299 "data_size": 7936 00:18:06.299 } 00:18:06.299 ] 00:18:06.299 }' 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.299 19:46:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.558 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:06.558 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.558 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.558 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:06.558 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.558 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:06.558 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.558 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.558 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:06.818 [2024-12-12 19:46:49.407573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' bc958f33-96bb-4fc6-b6eb-c1cef199e837 '!=' bc958f33-96bb-4fc6-b6eb-c1cef199e837 ']' 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 90339 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90339 ']' 00:18:06.818 19:46:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90339 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90339 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90339' 00:18:06.818 killing process with pid 90339 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 90339 00:18:06.818 [2024-12-12 19:46:49.485565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:06.818 [2024-12-12 19:46:49.485630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.818 [2024-12-12 19:46:49.485664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.818 [2024-12-12 19:46:49.485676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:06.818 19:46:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 90339 00:18:07.078 [2024-12-12 19:46:49.675940] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.019 19:46:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:08.019 00:18:08.019 real 0m5.842s 00:18:08.019 user 0m8.777s 00:18:08.019 sys 0m1.100s 00:18:08.019 
************************************ 00:18:08.019 END TEST raid_superblock_test_md_interleaved 00:18:08.019 ************************************ 00:18:08.019 19:46:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.019 19:46:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.019 19:46:50 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:08.019 19:46:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:08.019 19:46:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.019 19:46:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.019 ************************************ 00:18:08.019 START TEST raid_rebuild_test_sb_md_interleaved 00:18:08.019 ************************************ 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=90660 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 90660 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90660 ']' 00:18:08.019 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.020 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.020 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.020 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.020 19:46:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.280 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:08.280 Zero copy mechanism will not be used. 00:18:08.280 [2024-12-12 19:46:50.923811] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:08.280 [2024-12-12 19:46:50.923963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90660 ] 00:18:08.280 [2024-12-12 19:46:51.101092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.540 [2024-12-12 19:46:51.211844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.799 [2024-12-12 19:46:51.402582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.799 [2024-12-12 19:46:51.402637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.060 BaseBdev1_malloc 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.060 19:46:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.060 [2024-12-12 19:46:51.790611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:09.060 [2024-12-12 19:46:51.790725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.060 [2024-12-12 19:46:51.790749] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:09.060 [2024-12-12 19:46:51.790761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.060 [2024-12-12 19:46:51.792504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.060 [2024-12-12 19:46:51.792560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:09.060 BaseBdev1 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.060 BaseBdev2_malloc 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.060 [2024-12-12 19:46:51.844164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:09.060 [2024-12-12 19:46:51.844218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.060 [2024-12-12 19:46:51.844235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:09.060 [2024-12-12 19:46:51.844247] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.060 [2024-12-12 19:46:51.846008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.060 [2024-12-12 19:46:51.846097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:09.060 BaseBdev2 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.060 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.320 spare_malloc 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.320 spare_delay 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.320 [2024-12-12 19:46:51.923185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:09.320 [2024-12-12 19:46:51.923239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.320 [2024-12-12 19:46:51.923256] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:09.320 [2024-12-12 19:46:51.923266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.320 [2024-12-12 19:46:51.925037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.320 [2024-12-12 19:46:51.925138] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:09.320 spare 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.320 [2024-12-12 19:46:51.935211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.320 [2024-12-12 19:46:51.936966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.320 [2024-12-12 
19:46:51.937153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:09.320 [2024-12-12 19:46:51.937169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:09.320 [2024-12-12 19:46:51.937235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:09.320 [2024-12-12 19:46:51.937309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:09.320 [2024-12-12 19:46:51.937315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:09.320 [2024-12-12 19:46:51.937373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.320 "name": "raid_bdev1", 00:18:09.320 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:09.320 "strip_size_kb": 0, 00:18:09.320 "state": "online", 00:18:09.320 "raid_level": "raid1", 00:18:09.320 "superblock": true, 00:18:09.320 "num_base_bdevs": 2, 00:18:09.320 "num_base_bdevs_discovered": 2, 00:18:09.320 "num_base_bdevs_operational": 2, 00:18:09.320 "base_bdevs_list": [ 00:18:09.320 { 00:18:09.320 "name": "BaseBdev1", 00:18:09.320 "uuid": "a081f0aa-d75e-5dcc-b5be-070e02d3d96b", 00:18:09.320 "is_configured": true, 00:18:09.320 "data_offset": 256, 00:18:09.320 "data_size": 7936 00:18:09.320 }, 00:18:09.320 { 00:18:09.320 "name": "BaseBdev2", 00:18:09.320 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:09.320 "is_configured": true, 00:18:09.320 "data_offset": 256, 00:18:09.320 "data_size": 7936 00:18:09.320 } 00:18:09.320 ] 00:18:09.320 }' 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.320 19:46:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.580 19:46:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.580 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:09.580 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.580 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.580 [2024-12-12 19:46:52.394676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.580 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:09.838 19:46:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.838 [2024-12-12 19:46:52.490408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.838 19:46:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.838 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.839 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.839 "name": "raid_bdev1", 00:18:09.839 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:09.839 "strip_size_kb": 0, 00:18:09.839 "state": "online", 00:18:09.839 "raid_level": "raid1", 00:18:09.839 "superblock": true, 00:18:09.839 "num_base_bdevs": 2, 00:18:09.839 "num_base_bdevs_discovered": 1, 00:18:09.839 "num_base_bdevs_operational": 1, 00:18:09.839 "base_bdevs_list": [ 00:18:09.839 { 00:18:09.839 "name": null, 00:18:09.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.839 "is_configured": false, 00:18:09.839 "data_offset": 0, 00:18:09.839 "data_size": 7936 00:18:09.839 }, 00:18:09.839 { 00:18:09.839 "name": "BaseBdev2", 00:18:09.839 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:09.839 "is_configured": true, 00:18:09.839 "data_offset": 256, 00:18:09.839 "data_size": 7936 00:18:09.839 } 00:18:09.839 ] 00:18:09.839 }' 00:18:09.839 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.839 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.406 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.406 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.406 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.406 [2024-12-12 19:46:52.965742] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.406 [2024-12-12 19:46:52.982806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:10.406 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.406 19:46:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:10.406 [2024-12-12 19:46:52.984518] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.345 19:46:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.345 19:46:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.345 19:46:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.345 19:46:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.345 19:46:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.345 19:46:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.345 19:46:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.345 19:46:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.345 19:46:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.345 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.345 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.345 "name": "raid_bdev1", 00:18:11.345 
"uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:11.345 "strip_size_kb": 0, 00:18:11.345 "state": "online", 00:18:11.345 "raid_level": "raid1", 00:18:11.345 "superblock": true, 00:18:11.345 "num_base_bdevs": 2, 00:18:11.345 "num_base_bdevs_discovered": 2, 00:18:11.345 "num_base_bdevs_operational": 2, 00:18:11.345 "process": { 00:18:11.345 "type": "rebuild", 00:18:11.345 "target": "spare", 00:18:11.345 "progress": { 00:18:11.345 "blocks": 2560, 00:18:11.345 "percent": 32 00:18:11.345 } 00:18:11.345 }, 00:18:11.345 "base_bdevs_list": [ 00:18:11.345 { 00:18:11.345 "name": "spare", 00:18:11.345 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:11.345 "is_configured": true, 00:18:11.345 "data_offset": 256, 00:18:11.345 "data_size": 7936 00:18:11.345 }, 00:18:11.345 { 00:18:11.345 "name": "BaseBdev2", 00:18:11.345 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:11.345 "is_configured": true, 00:18:11.345 "data_offset": 256, 00:18:11.345 "data_size": 7936 00:18:11.345 } 00:18:11.345 ] 00:18:11.345 }' 00:18:11.345 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.345 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.345 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.345 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.345 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:11.345 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.345 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.345 [2024-12-12 19:46:54.136380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:11.605 [2024-12-12 19:46:54.189351] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:11.605 [2024-12-12 19:46:54.189459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.605 [2024-12-12 19:46:54.189494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.605 [2024-12-12 19:46:54.189521] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.605 "name": "raid_bdev1", 00:18:11.605 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:11.605 "strip_size_kb": 0, 00:18:11.605 "state": "online", 00:18:11.605 "raid_level": "raid1", 00:18:11.605 "superblock": true, 00:18:11.605 "num_base_bdevs": 2, 00:18:11.605 "num_base_bdevs_discovered": 1, 00:18:11.605 "num_base_bdevs_operational": 1, 00:18:11.605 "base_bdevs_list": [ 00:18:11.605 { 00:18:11.605 "name": null, 00:18:11.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.605 "is_configured": false, 00:18:11.605 "data_offset": 0, 00:18:11.605 "data_size": 7936 00:18:11.605 }, 00:18:11.605 { 00:18:11.605 "name": "BaseBdev2", 00:18:11.605 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:11.605 "is_configured": true, 00:18:11.605 "data_offset": 256, 00:18:11.605 "data_size": 7936 00:18:11.605 } 00:18:11.605 ] 00:18:11.605 }' 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.605 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.865 "name": "raid_bdev1", 00:18:11.865 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:11.865 "strip_size_kb": 0, 00:18:11.865 "state": "online", 00:18:11.865 "raid_level": "raid1", 00:18:11.865 "superblock": true, 00:18:11.865 "num_base_bdevs": 2, 00:18:11.865 "num_base_bdevs_discovered": 1, 00:18:11.865 "num_base_bdevs_operational": 1, 00:18:11.865 "base_bdevs_list": [ 00:18:11.865 { 00:18:11.865 "name": null, 00:18:11.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.865 "is_configured": false, 00:18:11.865 "data_offset": 0, 00:18:11.865 "data_size": 7936 00:18:11.865 }, 00:18:11.865 { 00:18:11.865 "name": "BaseBdev2", 00:18:11.865 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:11.865 "is_configured": true, 00:18:11.865 "data_offset": 256, 00:18:11.865 "data_size": 7936 00:18:11.865 } 00:18:11.865 ] 00:18:11.865 }' 
00:18:11.865 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.125 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.125 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.125 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.125 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:12.125 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.125 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.125 [2024-12-12 19:46:54.781917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.125 [2024-12-12 19:46:54.796944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:12.125 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.125 19:46:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:12.125 [2024-12-12 19:46:54.798646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.074 "name": "raid_bdev1", 00:18:13.074 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:13.074 "strip_size_kb": 0, 00:18:13.074 "state": "online", 00:18:13.074 "raid_level": "raid1", 00:18:13.074 "superblock": true, 00:18:13.074 "num_base_bdevs": 2, 00:18:13.074 "num_base_bdevs_discovered": 2, 00:18:13.074 "num_base_bdevs_operational": 2, 00:18:13.074 "process": { 00:18:13.074 "type": "rebuild", 00:18:13.074 "target": "spare", 00:18:13.074 "progress": { 00:18:13.074 "blocks": 2560, 00:18:13.074 "percent": 32 00:18:13.074 } 00:18:13.074 }, 00:18:13.074 "base_bdevs_list": [ 00:18:13.074 { 00:18:13.074 "name": "spare", 00:18:13.074 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:13.074 "is_configured": true, 00:18:13.074 "data_offset": 256, 00:18:13.074 "data_size": 7936 00:18:13.074 }, 00:18:13.074 { 00:18:13.074 "name": "BaseBdev2", 00:18:13.074 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:13.074 "is_configured": true, 00:18:13.074 "data_offset": 256, 00:18:13.074 "data_size": 7936 00:18:13.074 } 00:18:13.074 ] 00:18:13.074 }' 00:18:13.074 19:46:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.074 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:13.334 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=731 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.334 19:46:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.334 "name": "raid_bdev1", 00:18:13.334 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:13.334 "strip_size_kb": 0, 00:18:13.334 "state": "online", 00:18:13.334 "raid_level": "raid1", 00:18:13.334 "superblock": true, 00:18:13.334 "num_base_bdevs": 2, 00:18:13.334 "num_base_bdevs_discovered": 2, 00:18:13.334 "num_base_bdevs_operational": 2, 00:18:13.334 "process": { 00:18:13.334 "type": "rebuild", 00:18:13.334 "target": "spare", 00:18:13.334 "progress": { 00:18:13.334 "blocks": 2816, 00:18:13.334 "percent": 35 00:18:13.334 } 00:18:13.334 }, 00:18:13.334 "base_bdevs_list": [ 00:18:13.334 { 00:18:13.334 "name": "spare", 00:18:13.334 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:13.334 "is_configured": true, 00:18:13.334 "data_offset": 256, 00:18:13.334 "data_size": 7936 00:18:13.334 }, 00:18:13.334 { 00:18:13.334 "name": "BaseBdev2", 00:18:13.334 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:13.334 "is_configured": true, 00:18:13.334 "data_offset": 256, 00:18:13.334 "data_size": 7936 00:18:13.334 } 00:18:13.334 ] 00:18:13.334 }' 00:18:13.334 19:46:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.334 19:46:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.334 19:46:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.334 19:46:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.334 19:46:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.273 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.533 19:46:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.533 "name": "raid_bdev1", 00:18:14.533 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:14.533 "strip_size_kb": 0, 00:18:14.533 "state": "online", 00:18:14.533 "raid_level": "raid1", 00:18:14.533 "superblock": true, 00:18:14.533 "num_base_bdevs": 2, 00:18:14.533 "num_base_bdevs_discovered": 2, 00:18:14.533 "num_base_bdevs_operational": 2, 00:18:14.533 "process": { 00:18:14.533 "type": "rebuild", 00:18:14.533 "target": "spare", 00:18:14.533 "progress": { 00:18:14.533 "blocks": 5632, 00:18:14.533 "percent": 70 00:18:14.533 } 00:18:14.533 }, 00:18:14.533 "base_bdevs_list": [ 00:18:14.533 { 00:18:14.533 "name": "spare", 00:18:14.533 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:14.533 "is_configured": true, 00:18:14.533 "data_offset": 256, 00:18:14.533 "data_size": 7936 00:18:14.533 }, 00:18:14.533 { 00:18:14.533 "name": "BaseBdev2", 00:18:14.533 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:14.533 "is_configured": true, 00:18:14.533 "data_offset": 256, 00:18:14.533 "data_size": 7936 00:18:14.533 } 00:18:14.533 ] 00:18:14.533 }' 00:18:14.533 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.533 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.533 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.533 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.533 19:46:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.101 [2024-12-12 19:46:57.910598] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:15.101 [2024-12-12 19:46:57.910714] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:15.101 [2024-12-12 19:46:57.910841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.361 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.621 "name": "raid_bdev1", 00:18:15.621 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:15.621 "strip_size_kb": 0, 00:18:15.621 "state": "online", 00:18:15.621 "raid_level": "raid1", 00:18:15.621 "superblock": true, 00:18:15.621 "num_base_bdevs": 2, 00:18:15.621 
"num_base_bdevs_discovered": 2, 00:18:15.621 "num_base_bdevs_operational": 2, 00:18:15.621 "base_bdevs_list": [ 00:18:15.621 { 00:18:15.621 "name": "spare", 00:18:15.621 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:15.621 "is_configured": true, 00:18:15.621 "data_offset": 256, 00:18:15.621 "data_size": 7936 00:18:15.621 }, 00:18:15.621 { 00:18:15.621 "name": "BaseBdev2", 00:18:15.621 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:15.621 "is_configured": true, 00:18:15.621 "data_offset": 256, 00:18:15.621 "data_size": 7936 00:18:15.621 } 00:18:15.621 ] 00:18:15.621 }' 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.621 19:46:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.621 "name": "raid_bdev1", 00:18:15.621 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:15.621 "strip_size_kb": 0, 00:18:15.621 "state": "online", 00:18:15.621 "raid_level": "raid1", 00:18:15.621 "superblock": true, 00:18:15.621 "num_base_bdevs": 2, 00:18:15.621 "num_base_bdevs_discovered": 2, 00:18:15.621 "num_base_bdevs_operational": 2, 00:18:15.621 "base_bdevs_list": [ 00:18:15.621 { 00:18:15.621 "name": "spare", 00:18:15.621 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:15.621 "is_configured": true, 00:18:15.621 "data_offset": 256, 00:18:15.621 "data_size": 7936 00:18:15.621 }, 00:18:15.621 { 00:18:15.621 "name": "BaseBdev2", 00:18:15.621 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:15.621 "is_configured": true, 00:18:15.621 "data_offset": 256, 00:18:15.621 "data_size": 7936 00:18:15.621 } 00:18:15.621 ] 00:18:15.621 }' 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.621 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:15.881 19:46:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.881 "name": 
"raid_bdev1", 00:18:15.881 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:15.881 "strip_size_kb": 0, 00:18:15.881 "state": "online", 00:18:15.881 "raid_level": "raid1", 00:18:15.881 "superblock": true, 00:18:15.881 "num_base_bdevs": 2, 00:18:15.881 "num_base_bdevs_discovered": 2, 00:18:15.881 "num_base_bdevs_operational": 2, 00:18:15.881 "base_bdevs_list": [ 00:18:15.881 { 00:18:15.881 "name": "spare", 00:18:15.881 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:15.881 "is_configured": true, 00:18:15.881 "data_offset": 256, 00:18:15.881 "data_size": 7936 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "name": "BaseBdev2", 00:18:15.881 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:15.881 "is_configured": true, 00:18:15.881 "data_offset": 256, 00:18:15.881 "data_size": 7936 00:18:15.881 } 00:18:15.881 ] 00:18:15.881 }' 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.881 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.141 [2024-12-12 19:46:58.842056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.141 [2024-12-12 19:46:58.842128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.141 [2024-12-12 19:46:58.842220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.141 [2024-12-12 19:46:58.842361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.141 [2024-12-12 
19:46:58.842412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.141 19:46:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.141 [2024-12-12 19:46:58.913924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.141 [2024-12-12 19:46:58.913973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.141 [2024-12-12 19:46:58.913995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:16.141 [2024-12-12 19:46:58.914002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.141 [2024-12-12 19:46:58.915901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.141 [2024-12-12 19:46:58.915984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.141 [2024-12-12 19:46:58.916048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:16.141 [2024-12-12 19:46:58.916103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.141 [2024-12-12 19:46:58.916209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.141 spare 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.141 19:46:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.400 [2024-12-12 19:46:59.016090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:16.400 [2024-12-12 19:46:59.016116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:16.400 [2024-12-12 19:46:59.016199] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:16.400 [2024-12-12 19:46:59.016274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:16.400 [2024-12-12 19:46:59.016284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:16.400 [2024-12-12 19:46:59.016354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.400 19:46:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.400 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.400 "name": "raid_bdev1", 00:18:16.400 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:16.400 "strip_size_kb": 0, 00:18:16.401 "state": "online", 00:18:16.401 "raid_level": "raid1", 00:18:16.401 "superblock": true, 00:18:16.401 "num_base_bdevs": 2, 00:18:16.401 "num_base_bdevs_discovered": 2, 00:18:16.401 "num_base_bdevs_operational": 2, 00:18:16.401 "base_bdevs_list": [ 00:18:16.401 { 00:18:16.401 "name": "spare", 00:18:16.401 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:16.401 "is_configured": true, 00:18:16.401 "data_offset": 256, 00:18:16.401 "data_size": 7936 00:18:16.401 }, 00:18:16.401 { 00:18:16.401 "name": "BaseBdev2", 00:18:16.401 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:16.401 "is_configured": true, 00:18:16.401 "data_offset": 256, 00:18:16.401 "data_size": 7936 00:18:16.401 } 00:18:16.401 ] 00:18:16.401 }' 00:18:16.401 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.401 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.660 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.660 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.660 19:46:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.660 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.660 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.660 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.660 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.660 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.660 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.660 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.921 "name": "raid_bdev1", 00:18:16.921 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:16.921 "strip_size_kb": 0, 00:18:16.921 "state": "online", 00:18:16.921 "raid_level": "raid1", 00:18:16.921 "superblock": true, 00:18:16.921 "num_base_bdevs": 2, 00:18:16.921 "num_base_bdevs_discovered": 2, 00:18:16.921 "num_base_bdevs_operational": 2, 00:18:16.921 "base_bdevs_list": [ 00:18:16.921 { 00:18:16.921 "name": "spare", 00:18:16.921 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:16.921 "is_configured": true, 00:18:16.921 "data_offset": 256, 00:18:16.921 "data_size": 7936 00:18:16.921 }, 00:18:16.921 { 00:18:16.921 "name": "BaseBdev2", 00:18:16.921 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:16.921 "is_configured": true, 00:18:16.921 "data_offset": 256, 00:18:16.921 "data_size": 7936 00:18:16.921 } 00:18:16.921 ] 00:18:16.921 }' 00:18:16.921 19:46:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.921 [2024-12-12 19:46:59.692731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.921 19:46:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.921 "name": "raid_bdev1", 00:18:16.921 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:16.921 "strip_size_kb": 0, 00:18:16.921 "state": "online", 00:18:16.921 
"raid_level": "raid1", 00:18:16.921 "superblock": true, 00:18:16.921 "num_base_bdevs": 2, 00:18:16.921 "num_base_bdevs_discovered": 1, 00:18:16.921 "num_base_bdevs_operational": 1, 00:18:16.921 "base_bdevs_list": [ 00:18:16.921 { 00:18:16.921 "name": null, 00:18:16.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.921 "is_configured": false, 00:18:16.921 "data_offset": 0, 00:18:16.921 "data_size": 7936 00:18:16.921 }, 00:18:16.921 { 00:18:16.921 "name": "BaseBdev2", 00:18:16.921 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:16.921 "is_configured": true, 00:18:16.921 "data_offset": 256, 00:18:16.921 "data_size": 7936 00:18:16.921 } 00:18:16.921 ] 00:18:16.921 }' 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.921 19:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.526 19:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:17.526 19:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.526 19:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.526 [2024-12-12 19:47:00.104022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.527 [2024-12-12 19:47:00.104249] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.527 [2024-12-12 19:47:00.104314] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:17.527 [2024-12-12 19:47:00.104393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.527 [2024-12-12 19:47:00.119228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:17.527 19:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.527 19:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:17.527 [2024-12-12 19:47:00.121131] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:18.466 "name": "raid_bdev1", 00:18:18.466 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:18.466 "strip_size_kb": 0, 00:18:18.466 "state": "online", 00:18:18.466 "raid_level": "raid1", 00:18:18.466 "superblock": true, 00:18:18.466 "num_base_bdevs": 2, 00:18:18.466 "num_base_bdevs_discovered": 2, 00:18:18.466 "num_base_bdevs_operational": 2, 00:18:18.466 "process": { 00:18:18.466 "type": "rebuild", 00:18:18.466 "target": "spare", 00:18:18.466 "progress": { 00:18:18.466 "blocks": 2560, 00:18:18.466 "percent": 32 00:18:18.466 } 00:18:18.466 }, 00:18:18.466 "base_bdevs_list": [ 00:18:18.466 { 00:18:18.466 "name": "spare", 00:18:18.466 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:18.466 "is_configured": true, 00:18:18.466 "data_offset": 256, 00:18:18.466 "data_size": 7936 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "name": "BaseBdev2", 00:18:18.466 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:18.466 "is_configured": true, 00:18:18.466 "data_offset": 256, 00:18:18.466 "data_size": 7936 00:18:18.466 } 00:18:18.466 ] 00:18:18.466 }' 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.466 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.466 [2024-12-12 19:47:01.288589] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.726 [2024-12-12 19:47:01.325986] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:18.726 [2024-12-12 19:47:01.326048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.726 [2024-12-12 19:47:01.326062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.726 [2024-12-12 19:47:01.326070] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.726 19:47:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.726 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.726 "name": "raid_bdev1", 00:18:18.726 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:18.726 "strip_size_kb": 0, 00:18:18.726 "state": "online", 00:18:18.726 "raid_level": "raid1", 00:18:18.726 "superblock": true, 00:18:18.726 "num_base_bdevs": 2, 00:18:18.726 "num_base_bdevs_discovered": 1, 00:18:18.726 "num_base_bdevs_operational": 1, 00:18:18.726 "base_bdevs_list": [ 00:18:18.726 { 00:18:18.726 "name": null, 00:18:18.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.726 "is_configured": false, 00:18:18.726 "data_offset": 0, 00:18:18.726 "data_size": 7936 00:18:18.726 }, 00:18:18.726 { 00:18:18.726 "name": "BaseBdev2", 00:18:18.727 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:18.727 "is_configured": true, 00:18:18.727 "data_offset": 256, 00:18:18.727 "data_size": 7936 00:18:18.727 } 00:18:18.727 ] 00:18:18.727 }' 00:18:18.727 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.727 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.986 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:18.986 19:47:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.986 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.986 [2024-12-12 19:47:01.746241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:18.986 [2024-12-12 19:47:01.746380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.986 [2024-12-12 19:47:01.746427] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:18.986 [2024-12-12 19:47:01.746479] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.986 [2024-12-12 19:47:01.746723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.986 [2024-12-12 19:47:01.746776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:18.986 [2024-12-12 19:47:01.746863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:18.986 [2024-12-12 19:47:01.746903] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:18.986 [2024-12-12 19:47:01.746949] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:18.986 [2024-12-12 19:47:01.747006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.986 [2024-12-12 19:47:01.761952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:18.986 spare 00:18:18.986 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.986 [2024-12-12 19:47:01.763788] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.986 19:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:19.925 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.183 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:20.183 "name": "raid_bdev1", 00:18:20.183 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:20.183 "strip_size_kb": 0, 00:18:20.183 "state": "online", 00:18:20.183 "raid_level": "raid1", 00:18:20.183 "superblock": true, 00:18:20.183 "num_base_bdevs": 2, 00:18:20.183 "num_base_bdevs_discovered": 2, 00:18:20.183 "num_base_bdevs_operational": 2, 00:18:20.183 "process": { 00:18:20.183 "type": "rebuild", 00:18:20.183 "target": "spare", 00:18:20.183 "progress": { 00:18:20.183 "blocks": 2560, 00:18:20.183 "percent": 32 00:18:20.183 } 00:18:20.183 }, 00:18:20.183 "base_bdevs_list": [ 00:18:20.183 { 00:18:20.183 "name": "spare", 00:18:20.183 "uuid": "6bf09f04-93d3-54a3-93b4-e7ac38caaadb", 00:18:20.183 "is_configured": true, 00:18:20.183 "data_offset": 256, 00:18:20.183 "data_size": 7936 00:18:20.183 }, 00:18:20.183 { 00:18:20.184 "name": "BaseBdev2", 00:18:20.184 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:20.184 "is_configured": true, 00:18:20.184 "data_offset": 256, 00:18:20.184 "data_size": 7936 00:18:20.184 } 00:18:20.184 ] 00:18:20.184 }' 00:18:20.184 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.184 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.184 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.184 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.184 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:20.184 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.184 19:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.184 [2024-12-12 
19:47:02.907663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.184 [2024-12-12 19:47:02.968557] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:20.184 [2024-12-12 19:47:02.968657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.184 [2024-12-12 19:47:02.968676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.184 [2024-12-12 19:47:02.968683] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.184 19:47:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.184 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.442 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.442 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.442 "name": "raid_bdev1", 00:18:20.442 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:20.442 "strip_size_kb": 0, 00:18:20.442 "state": "online", 00:18:20.442 "raid_level": "raid1", 00:18:20.442 "superblock": true, 00:18:20.442 "num_base_bdevs": 2, 00:18:20.442 "num_base_bdevs_discovered": 1, 00:18:20.442 "num_base_bdevs_operational": 1, 00:18:20.442 "base_bdevs_list": [ 00:18:20.442 { 00:18:20.442 "name": null, 00:18:20.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.442 "is_configured": false, 00:18:20.442 "data_offset": 0, 00:18:20.442 "data_size": 7936 00:18:20.442 }, 00:18:20.442 { 00:18:20.442 "name": "BaseBdev2", 00:18:20.442 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:20.442 "is_configured": true, 00:18:20.442 "data_offset": 256, 00:18:20.442 "data_size": 7936 00:18:20.442 } 00:18:20.442 ] 00:18:20.442 }' 00:18:20.442 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.443 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.702 19:47:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.702 "name": "raid_bdev1", 00:18:20.702 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:20.702 "strip_size_kb": 0, 00:18:20.702 "state": "online", 00:18:20.702 "raid_level": "raid1", 00:18:20.702 "superblock": true, 00:18:20.702 "num_base_bdevs": 2, 00:18:20.702 "num_base_bdevs_discovered": 1, 00:18:20.702 "num_base_bdevs_operational": 1, 00:18:20.702 "base_bdevs_list": [ 00:18:20.702 { 00:18:20.702 "name": null, 00:18:20.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.702 "is_configured": false, 00:18:20.702 "data_offset": 0, 00:18:20.702 "data_size": 7936 00:18:20.702 }, 00:18:20.702 { 00:18:20.702 "name": "BaseBdev2", 00:18:20.702 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:20.702 "is_configured": true, 00:18:20.702 "data_offset": 256, 
00:18:20.702 "data_size": 7936 00:18:20.702 } 00:18:20.702 ] 00:18:20.702 }' 00:18:20.702 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.961 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.961 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.962 [2024-12-12 19:47:03.615888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:20.962 [2024-12-12 19:47:03.615943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.962 [2024-12-12 19:47:03.615961] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:20.962 [2024-12-12 19:47:03.615970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.962 [2024-12-12 19:47:03.616141] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.962 [2024-12-12 19:47:03.616163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:20.962 [2024-12-12 19:47:03.616211] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:20.962 [2024-12-12 19:47:03.616226] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.962 [2024-12-12 19:47:03.616264] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:20.962 [2024-12-12 19:47:03.616273] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:20.962 BaseBdev1 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.962 19:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.900 19:47:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.900 "name": "raid_bdev1", 00:18:21.900 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:21.900 "strip_size_kb": 0, 00:18:21.900 "state": "online", 00:18:21.900 "raid_level": "raid1", 00:18:21.900 "superblock": true, 00:18:21.900 "num_base_bdevs": 2, 00:18:21.900 "num_base_bdevs_discovered": 1, 00:18:21.900 "num_base_bdevs_operational": 1, 00:18:21.900 "base_bdevs_list": [ 00:18:21.900 { 00:18:21.900 "name": null, 00:18:21.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.900 "is_configured": false, 00:18:21.900 "data_offset": 0, 00:18:21.900 "data_size": 7936 00:18:21.900 }, 00:18:21.900 { 00:18:21.900 "name": "BaseBdev2", 00:18:21.900 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:21.900 "is_configured": true, 00:18:21.900 "data_offset": 256, 00:18:21.900 "data_size": 7936 00:18:21.900 } 00:18:21.900 ] 00:18:21.900 }' 00:18:21.900 19:47:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.900 19:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.470 "name": "raid_bdev1", 00:18:22.470 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:22.470 "strip_size_kb": 0, 00:18:22.470 "state": "online", 00:18:22.470 "raid_level": "raid1", 00:18:22.470 "superblock": true, 00:18:22.470 "num_base_bdevs": 2, 00:18:22.470 "num_base_bdevs_discovered": 1, 00:18:22.470 "num_base_bdevs_operational": 1, 00:18:22.470 "base_bdevs_list": [ 00:18:22.470 { 00:18:22.470 "name": 
null, 00:18:22.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.470 "is_configured": false, 00:18:22.470 "data_offset": 0, 00:18:22.470 "data_size": 7936 00:18:22.470 }, 00:18:22.470 { 00:18:22.470 "name": "BaseBdev2", 00:18:22.470 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:22.470 "is_configured": true, 00:18:22.470 "data_offset": 256, 00:18:22.470 "data_size": 7936 00:18:22.470 } 00:18:22.470 ] 00:18:22.470 }' 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.470 [2024-12-12 19:47:05.213291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.470 [2024-12-12 19:47:05.213429] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:22.470 [2024-12-12 19:47:05.213469] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:22.470 request: 00:18:22.470 { 00:18:22.470 "base_bdev": "BaseBdev1", 00:18:22.470 "raid_bdev": "raid_bdev1", 00:18:22.470 "method": "bdev_raid_add_base_bdev", 00:18:22.470 "req_id": 1 00:18:22.470 } 00:18:22.470 Got JSON-RPC error response 00:18:22.470 response: 00:18:22.470 { 00:18:22.470 "code": -22, 00:18:22.470 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:22.470 } 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.470 19:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.408 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.667 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.667 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.667 "name": "raid_bdev1", 00:18:23.667 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:23.667 "strip_size_kb": 0, 
00:18:23.667 "state": "online", 00:18:23.667 "raid_level": "raid1", 00:18:23.668 "superblock": true, 00:18:23.668 "num_base_bdevs": 2, 00:18:23.668 "num_base_bdevs_discovered": 1, 00:18:23.668 "num_base_bdevs_operational": 1, 00:18:23.668 "base_bdevs_list": [ 00:18:23.668 { 00:18:23.668 "name": null, 00:18:23.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.668 "is_configured": false, 00:18:23.668 "data_offset": 0, 00:18:23.668 "data_size": 7936 00:18:23.668 }, 00:18:23.668 { 00:18:23.668 "name": "BaseBdev2", 00:18:23.668 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:23.668 "is_configured": true, 00:18:23.668 "data_offset": 256, 00:18:23.668 "data_size": 7936 00:18:23.668 } 00:18:23.668 ] 00:18:23.668 }' 00:18:23.668 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.668 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.926 19:47:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.926 "name": "raid_bdev1", 00:18:23.926 "uuid": "e90c4d24-7c9f-43a0-aa45-675bd0222c6a", 00:18:23.926 "strip_size_kb": 0, 00:18:23.926 "state": "online", 00:18:23.926 "raid_level": "raid1", 00:18:23.926 "superblock": true, 00:18:23.926 "num_base_bdevs": 2, 00:18:23.926 "num_base_bdevs_discovered": 1, 00:18:23.926 "num_base_bdevs_operational": 1, 00:18:23.926 "base_bdevs_list": [ 00:18:23.926 { 00:18:23.926 "name": null, 00:18:23.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.926 "is_configured": false, 00:18:23.926 "data_offset": 0, 00:18:23.926 "data_size": 7936 00:18:23.926 }, 00:18:23.926 { 00:18:23.926 "name": "BaseBdev2", 00:18:23.926 "uuid": "5dc95952-b97e-51cf-8570-741c25d1aa2e", 00:18:23.926 "is_configured": true, 00:18:23.926 "data_offset": 256, 00:18:23.926 "data_size": 7936 00:18:23.926 } 00:18:23.926 ] 00:18:23.926 }' 00:18:23.926 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.185 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.185 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.185 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 90660 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90660 ']' 00:18:24.186 19:47:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90660 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90660 00:18:24.186 killing process with pid 90660 00:18:24.186 Received shutdown signal, test time was about 60.000000 seconds 00:18:24.186 00:18:24.186 Latency(us) 00:18:24.186 [2024-12-12T19:47:07.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.186 [2024-12-12T19:47:07.031Z] =================================================================================================================== 00:18:24.186 [2024-12-12T19:47:07.031Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90660' 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 90660 00:18:24.186 [2024-12-12 19:47:06.883575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.186 [2024-12-12 19:47:06.883679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.186 [2024-12-12 19:47:06.883719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.186 [2024-12-12 19:47:06.883729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:24.186 19:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 90660 00:18:24.446 [2024-12-12 19:47:07.163841] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.386 19:47:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:25.386 00:18:25.386 real 0m17.385s 00:18:25.386 user 0m22.729s 00:18:25.386 sys 0m1.725s 00:18:25.386 19:47:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.386 19:47:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.386 ************************************ 00:18:25.386 END TEST raid_rebuild_test_sb_md_interleaved 00:18:25.386 ************************************ 00:18:25.646 19:47:08 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:25.646 19:47:08 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:25.646 19:47:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 90660 ']' 00:18:25.646 19:47:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 90660 00:18:25.646 19:47:08 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:25.646 00:18:25.646 real 11m53.978s 00:18:25.646 user 15m59.011s 00:18:25.646 sys 1m54.645s 00:18:25.646 ************************************ 00:18:25.646 END TEST bdev_raid 00:18:25.646 ************************************ 00:18:25.646 19:47:08 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.646 19:47:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.646 19:47:08 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:25.646 19:47:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:25.646 19:47:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.646 19:47:08 -- common/autotest_common.sh@10 -- # set +x 00:18:25.646 
************************************ 00:18:25.646 START TEST spdkcli_raid 00:18:25.646 ************************************ 00:18:25.646 19:47:08 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:25.646 * Looking for test storage... 00:18:25.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:25.646 19:47:08 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.907 19:47:08 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:25.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.907 --rc genhtml_branch_coverage=1 00:18:25.907 --rc genhtml_function_coverage=1 00:18:25.907 --rc genhtml_legend=1 00:18:25.907 --rc geninfo_all_blocks=1 00:18:25.907 --rc geninfo_unexecuted_blocks=1 00:18:25.907 00:18:25.907 ' 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:25.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.907 --rc genhtml_branch_coverage=1 00:18:25.907 --rc genhtml_function_coverage=1 00:18:25.907 --rc genhtml_legend=1 00:18:25.907 --rc geninfo_all_blocks=1 00:18:25.907 --rc geninfo_unexecuted_blocks=1 00:18:25.907 00:18:25.907 ' 00:18:25.907 
19:47:08 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:25.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.907 --rc genhtml_branch_coverage=1 00:18:25.907 --rc genhtml_function_coverage=1 00:18:25.907 --rc genhtml_legend=1 00:18:25.907 --rc geninfo_all_blocks=1 00:18:25.907 --rc geninfo_unexecuted_blocks=1 00:18:25.907 00:18:25.907 ' 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:25.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.907 --rc genhtml_branch_coverage=1 00:18:25.907 --rc genhtml_function_coverage=1 00:18:25.907 --rc genhtml_legend=1 00:18:25.907 --rc geninfo_all_blocks=1 00:18:25.907 --rc geninfo_unexecuted_blocks=1 00:18:25.907 00:18:25.907 ' 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:25.907 19:47:08 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=91342 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:25.907 19:47:08 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 91342 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 91342 ']' 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.907 19:47:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.907 [2024-12-12 19:47:08.728378] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:25.907 [2024-12-12 19:47:08.728498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91342 ] 00:18:26.168 [2024-12-12 19:47:08.905019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:26.168 [2024-12-12 19:47:09.009090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.168 [2024-12-12 19:47:09.009113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.107 19:47:09 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.107 19:47:09 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:27.107 19:47:09 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:27.107 19:47:09 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.107 19:47:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.107 19:47:09 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:27.107 19:47:09 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.107 19:47:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.107 19:47:09 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:27.107 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:27.107 ' 00:18:29.014 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:29.014 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:29.014 19:47:11 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:29.014 19:47:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.014 19:47:11 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.014 19:47:11 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:29.014 19:47:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.014 19:47:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.014 19:47:11 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:29.014 ' 00:18:29.953 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:29.953 19:47:12 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:29.953 19:47:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.953 19:47:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 19:47:12 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:29.953 19:47:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.953 19:47:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 19:47:12 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:29.953 19:47:12 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:30.522 19:47:13 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:30.522 19:47:13 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:30.522 19:47:13 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:30.522 19:47:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.522 19:47:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.522 19:47:13 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:30.522 19:47:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.522 19:47:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.522 19:47:13 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:30.522 ' 00:18:31.461 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:31.720 19:47:14 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:31.720 19:47:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.720 19:47:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.720 19:47:14 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:31.720 19:47:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.720 19:47:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.720 19:47:14 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:31.720 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:31.720 ' 00:18:33.099 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:33.099 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:33.359 19:47:15 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:33.359 19:47:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:33.359 19:47:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.359 19:47:16 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 91342 00:18:33.359 19:47:16 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 91342 ']' 00:18:33.359 19:47:16 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 91342 00:18:33.359 19:47:16 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:33.359 19:47:16 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.359 19:47:16 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91342 00:18:33.359 killing process with pid 91342 00:18:33.359 19:47:16 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.359 19:47:16 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.359 19:47:16 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91342' 00:18:33.359 19:47:16 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 91342 00:18:33.359 19:47:16 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 91342 00:18:35.900 19:47:18 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:35.900 19:47:18 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 91342 ']' 00:18:35.900 19:47:18 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 91342 00:18:35.900 19:47:18 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 91342 ']' 00:18:35.900 Process with pid 91342 is not found 00:18:35.900 19:47:18 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 91342 00:18:35.900 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91342) - No such process 00:18:35.900 19:47:18 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 91342 is not found' 00:18:35.900 19:47:18 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:35.900 19:47:18 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:35.900 19:47:18 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:35.900 19:47:18 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:35.900 00:18:35.900 real 0m9.948s 00:18:35.900 user 0m20.448s 00:18:35.900 sys 
0m1.166s 00:18:35.900 19:47:18 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.900 19:47:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:35.900 ************************************ 00:18:35.900 END TEST spdkcli_raid 00:18:35.900 ************************************ 00:18:35.900 19:47:18 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:35.900 19:47:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:35.900 19:47:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.900 19:47:18 -- common/autotest_common.sh@10 -- # set +x 00:18:35.900 ************************************ 00:18:35.900 START TEST blockdev_raid5f 00:18:35.900 ************************************ 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:35.900 * Looking for test storage... 00:18:35.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.900 19:47:18 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:35.900 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.900 --rc genhtml_branch_coverage=1 00:18:35.900 --rc genhtml_function_coverage=1 00:18:35.900 --rc genhtml_legend=1 00:18:35.900 --rc geninfo_all_blocks=1 00:18:35.900 --rc geninfo_unexecuted_blocks=1 00:18:35.900 00:18:35.900 ' 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.900 --rc genhtml_branch_coverage=1 00:18:35.900 --rc genhtml_function_coverage=1 00:18:35.900 --rc genhtml_legend=1 00:18:35.900 --rc geninfo_all_blocks=1 00:18:35.900 --rc geninfo_unexecuted_blocks=1 00:18:35.900 00:18:35.900 ' 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.900 --rc genhtml_branch_coverage=1 00:18:35.900 --rc genhtml_function_coverage=1 00:18:35.900 --rc genhtml_legend=1 00:18:35.900 --rc geninfo_all_blocks=1 00:18:35.900 --rc geninfo_unexecuted_blocks=1 00:18:35.900 00:18:35.900 ' 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.900 --rc genhtml_branch_coverage=1 00:18:35.900 --rc genhtml_function_coverage=1 00:18:35.900 --rc genhtml_legend=1 00:18:35.900 --rc geninfo_all_blocks=1 00:18:35.900 --rc geninfo_unexecuted_blocks=1 00:18:35.900 00:18:35.900 ' 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=91620 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:35.900 19:47:18 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 91620 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 91620 ']' 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.900 19:47:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.900 [2024-12-12 19:47:18.723920] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:35.900 [2024-12-12 19:47:18.724085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91620 ] 00:18:36.160 [2024-12-12 19:47:18.896956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.160 [2024-12-12 19:47:19.001705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.100 19:47:19 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.100 19:47:19 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:37.100 19:47:19 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:37.100 19:47:19 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:37.100 19:47:19 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:37.100 19:47:19 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.100 19:47:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.100 Malloc0 00:18:37.100 Malloc1 00:18:37.100 Malloc2 00:18:37.100 19:47:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.100 19:47:19 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:37.100 19:47:19 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.100 19:47:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.100 19:47:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.100 19:47:19 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:37.100 19:47:19 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:37.100 19:47:19 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.100 19:47:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.360 19:47:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.360 19:47:19 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:37.360 19:47:19 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.360 19:47:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.360 19:47:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.360 19:47:19 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:37.360 19:47:19 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.360 19:47:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2c9d9b9d-9cfa-4bb6-b226-cf64b407b0c3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2c9d9b9d-9cfa-4bb6-b226-cf64b407b0c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2c9d9b9d-9cfa-4bb6-b226-cf64b407b0c3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "a0055600-e86e-4073-87fe-b91be24cc4da",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2617c8e0-8339-4fab-8755-91db8ae63344",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0bbec448-eb76-400d-906d-b01d804d4734",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:37.360 19:47:20 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 91620 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 91620 ']' 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 91620 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91620 00:18:37.360 killing process with pid 91620 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91620' 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 91620 00:18:37.360 19:47:20 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 91620 00:18:39.902 19:47:22 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:39.902 19:47:22 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:39.902 19:47:22 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:39.902 19:47:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.902 19:47:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:39.902 ************************************ 00:18:39.902 START TEST bdev_hello_world 00:18:39.902 ************************************ 00:18:39.902 19:47:22 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:39.902 [2024-12-12 19:47:22.731066] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:39.902 [2024-12-12 19:47:22.731259] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91683 ] 00:18:40.162 [2024-12-12 19:47:22.907609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.421 [2024-12-12 19:47:23.017975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.992 [2024-12-12 19:47:23.530517] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:40.992 [2024-12-12 19:47:23.530653] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:40.992 [2024-12-12 19:47:23.530687] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:40.992 [2024-12-12 19:47:23.531200] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:40.992 [2024-12-12 19:47:23.531371] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:40.992 [2024-12-12 19:47:23.531423] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:40.992 [2024-12-12 19:47:23.531511] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:40.992 00:18:40.992 [2024-12-12 19:47:23.531582] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:42.375 00:18:42.375 real 0m2.198s 00:18:42.375 user 0m1.817s 00:18:42.375 sys 0m0.258s 00:18:42.375 19:47:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.375 19:47:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:42.375 ************************************ 00:18:42.375 END TEST bdev_hello_world 00:18:42.375 ************************************ 00:18:42.375 19:47:24 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:42.375 19:47:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:42.375 19:47:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.375 19:47:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:42.375 ************************************ 00:18:42.375 START TEST bdev_bounds 00:18:42.375 ************************************ 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=91725 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 91725' 00:18:42.375 Process bdevio pid: 91725 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 91725 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 91725 ']' 00:18:42.375 19:47:24 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.375 19:47:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:42.375 [2024-12-12 19:47:25.006653] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:42.376 [2024-12-12 19:47:25.006762] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91725 ] 00:18:42.376 [2024-12-12 19:47:25.180933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:42.636 [2024-12-12 19:47:25.291955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.636 [2024-12-12 19:47:25.292161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.636 [2024-12-12 19:47:25.292171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.205 19:47:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.205 19:47:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:43.205 19:47:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:43.205 I/O targets: 00:18:43.205 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:43.205 00:18:43.205 
00:18:43.205 CUnit - A unit testing framework for C - Version 2.1-3 00:18:43.205 http://cunit.sourceforge.net/ 00:18:43.205 00:18:43.205 00:18:43.205 Suite: bdevio tests on: raid5f 00:18:43.205 Test: blockdev write read block ...passed 00:18:43.205 Test: blockdev write zeroes read block ...passed 00:18:43.205 Test: blockdev write zeroes read no split ...passed 00:18:43.205 Test: blockdev write zeroes read split ...passed 00:18:43.466 Test: blockdev write zeroes read split partial ...passed 00:18:43.466 Test: blockdev reset ...passed 00:18:43.466 Test: blockdev write read 8 blocks ...passed 00:18:43.466 Test: blockdev write read size > 128k ...passed 00:18:43.466 Test: blockdev write read invalid size ...passed 00:18:43.466 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:43.466 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:43.466 Test: blockdev write read max offset ...passed 00:18:43.466 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:43.466 Test: blockdev writev readv 8 blocks ...passed 00:18:43.466 Test: blockdev writev readv 30 x 1block ...passed 00:18:43.466 Test: blockdev writev readv block ...passed 00:18:43.466 Test: blockdev writev readv size > 128k ...passed 00:18:43.466 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:43.466 Test: blockdev comparev and writev ...passed 00:18:43.466 Test: blockdev nvme passthru rw ...passed 00:18:43.466 Test: blockdev nvme passthru vendor specific ...passed 00:18:43.466 Test: blockdev nvme admin passthru ...passed 00:18:43.466 Test: blockdev copy ...passed 00:18:43.466 00:18:43.466 Run Summary: Type Total Ran Passed Failed Inactive 00:18:43.466 suites 1 1 n/a 0 0 00:18:43.466 tests 23 23 23 0 0 00:18:43.466 asserts 130 130 130 0 n/a 00:18:43.466 00:18:43.466 Elapsed time = 0.609 seconds 00:18:43.466 0 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 91725 00:18:43.466 
19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 91725 ']' 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 91725 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91725 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91725' 00:18:43.466 killing process with pid 91725 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 91725 00:18:43.466 19:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 91725 00:18:44.847 19:47:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:44.847 00:18:44.847 real 0m2.634s 00:18:44.847 user 0m6.477s 00:18:44.847 sys 0m0.390s 00:18:44.847 19:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.847 19:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:44.847 ************************************ 00:18:44.847 END TEST bdev_bounds 00:18:44.847 ************************************ 00:18:44.847 19:47:27 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:44.848 19:47:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:44.848 19:47:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.848 
19:47:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:44.848 ************************************ 00:18:44.848 START TEST bdev_nbd 00:18:44.848 ************************************ 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=91785 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 91785 /var/tmp/spdk-nbd.sock 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 91785 ']' 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:44.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.848 19:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:45.107 [2024-12-12 19:47:27.735123] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:45.107 [2024-12-12 19:47:27.735333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.107 [2024-12-12 19:47:27.916011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.366 [2024-12-12 19:47:28.023326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:45.934 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.193 1+0 records in 00:18:46.193 1+0 records out 00:18:46.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473195 s, 8.7 MB/s 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:46.193 19:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:46.193 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:46.193 { 00:18:46.193 "nbd_device": "/dev/nbd0", 00:18:46.193 "bdev_name": "raid5f" 00:18:46.193 } 00:18:46.193 ]' 00:18:46.193 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:46.193 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:46.193 { 00:18:46.193 "nbd_device": "/dev/nbd0", 00:18:46.193 "bdev_name": "raid5f" 00:18:46.193 } 00:18:46.193 ]' 00:18:46.193 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.452 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.712 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:46.971 /dev/nbd0 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.971 19:47:29 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.971 1+0 records in 00:18:46.971 1+0 records out 00:18:46.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058243 s, 7.0 MB/s 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.971 19:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:47.231 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:47.231 { 00:18:47.231 "nbd_device": "/dev/nbd0", 00:18:47.231 "bdev_name": "raid5f" 00:18:47.231 } 00:18:47.231 ]' 00:18:47.231 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:47.231 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:47.231 { 00:18:47.231 "nbd_device": "/dev/nbd0", 00:18:47.231 "bdev_name": "raid5f" 00:18:47.231 } 00:18:47.231 ]' 00:18:47.231 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:47.231 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:47.231 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:47.490 256+0 records in 00:18:47.490 256+0 records out 00:18:47.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137759 s, 76.1 MB/s 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:47.490 256+0 records in 00:18:47.490 256+0 records out 00:18:47.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0376158 s, 27.9 MB/s 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.490 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:47.749 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:48.008 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:48.267 malloc_lvol_verify 00:18:48.267 19:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:48.267 3133f2ff-6dbb-4c20-8e63-b32130c46eb9 00:18:48.267 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:48.526 5e6dd819-d3d5-4b83-8f2b-c7dc35e9b93f 00:18:48.526 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:48.785 /dev/nbd0 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:48.785 mke2fs 1.47.0 (5-Feb-2023) 00:18:48.785 Discarding device blocks: 0/4096 done 00:18:48.785 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:48.785 00:18:48.785 Allocating group tables: 0/1 done 00:18:48.785 Writing inode tables: 0/1 done 00:18:48.785 Creating journal (1024 blocks): done 00:18:48.785 Writing superblocks and filesystem accounting information: 0/1 done 00:18:48.785 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.785 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 91785 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 91785 ']' 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 91785 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91785 00:18:49.045 killing process with pid 91785 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91785' 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 91785 00:18:49.045 19:47:31 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 91785 00:18:50.426 ************************************ 00:18:50.426 END TEST bdev_nbd 00:18:50.426 ************************************ 00:18:50.426 19:47:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:50.426 00:18:50.426 real 0m5.519s 00:18:50.426 user 0m7.426s 00:18:50.426 sys 0m1.341s 00:18:50.426 19:47:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.426 19:47:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:50.426 19:47:33 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:50.426 19:47:33 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:18:50.426 19:47:33 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:18:50.426 19:47:33 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:50.426 19:47:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:50.426 19:47:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.426 19:47:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:50.426 ************************************ 00:18:50.426 START TEST bdev_fio 00:18:50.426 ************************************ 00:18:50.426 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:50.426 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:50.426 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:50.426 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:50.426 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:50.426 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:50.427 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:50.687 ************************************ 00:18:50.687 START TEST bdev_fio_rw_verify 00:18:50.687 ************************************ 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:50.687 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:50.688 19:47:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:50.948 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:50.948 fio-3.35 00:18:50.948 Starting 1 thread 00:19:03.243 00:19:03.243 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91994: Thu Dec 12 19:47:44 2024 00:19:03.243 read: IOPS=12.2k, BW=47.8MiB/s (50.2MB/s)(478MiB/10001msec) 00:19:03.243 slat (usec): min=17, max=112, avg=19.44, stdev= 2.02 00:19:03.243 clat (usec): min=11, max=314, avg=131.83, stdev=46.07 00:19:03.243 lat (usec): min=32, max=343, avg=151.27, stdev=46.35 00:19:03.243 clat percentiles (usec): 00:19:03.243 | 50.000th=[ 135], 99.000th=[ 219], 99.900th=[ 253], 99.990th=[ 293], 00:19:03.243 | 99.999th=[ 310] 00:19:03.243 write: IOPS=12.9k, BW=50.3MiB/s (52.8MB/s)(497MiB/9872msec); 0 zone resets 00:19:03.243 slat (usec): min=7, max=248, avg=16.17, stdev= 3.98 00:19:03.243 clat (usec): min=58, max=1798, avg=300.38, stdev=50.63 00:19:03.243 lat (usec): min=73, max=1905, avg=316.55, stdev=52.39 00:19:03.243 clat percentiles (usec): 00:19:03.243 | 50.000th=[ 306], 99.000th=[ 392], 99.900th=[ 922], 99.990th=[ 1582], 00:19:03.243 | 99.999th=[ 1778] 00:19:03.243 bw ( KiB/s): min=46296, max=54632, per=98.74%, avg=50906.11, stdev=1795.86, samples=19 00:19:03.243 iops : min=11574, max=13658, avg=12726.53, stdev=448.96, samples=19 00:19:03.243 lat (usec) : 20=0.01%, 50=0.01%, 100=15.53%, 
250=39.46%, 500=44.84% 00:19:03.243 lat (usec) : 750=0.09%, 1000=0.04% 00:19:03.243 lat (msec) : 2=0.04% 00:19:03.243 cpu : usr=98.88%, sys=0.38%, ctx=45, majf=0, minf=10071 00:19:03.243 IO depths : 1=7.6%, 2=19.7%, 4=55.3%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.243 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.243 issued rwts: total=122486,127238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.243 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:03.243 00:19:03.243 Run status group 0 (all jobs): 00:19:03.243 READ: bw=47.8MiB/s (50.2MB/s), 47.8MiB/s-47.8MiB/s (50.2MB/s-50.2MB/s), io=478MiB (502MB), run=10001-10001msec 00:19:03.243 WRITE: bw=50.3MiB/s (52.8MB/s), 50.3MiB/s-50.3MiB/s (52.8MB/s-52.8MB/s), io=497MiB (521MB), run=9872-9872msec 00:19:03.503 ----------------------------------------------------- 00:19:03.503 Suppressions used: 00:19:03.503 count bytes template 00:19:03.503 1 7 /usr/src/fio/parse.c 00:19:03.503 935 89760 /usr/src/fio/iolog.c 00:19:03.503 1 8 libtcmalloc_minimal.so 00:19:03.503 1 904 libcrypto.so 00:19:03.503 ----------------------------------------------------- 00:19:03.503 00:19:03.503 00:19:03.503 real 0m12.857s 00:19:03.503 user 0m13.043s 00:19:03.503 sys 0m0.672s 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:03.503 ************************************ 00:19:03.503 END TEST bdev_fio_rw_verify 00:19:03.503 ************************************ 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2c9d9b9d-9cfa-4bb6-b226-cf64b407b0c3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2c9d9b9d-9cfa-4bb6-b226-cf64b407b0c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2c9d9b9d-9cfa-4bb6-b226-cf64b407b0c3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "a0055600-e86e-4073-87fe-b91be24cc4da",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2617c8e0-8339-4fab-8755-91db8ae63344",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0bbec448-eb76-400d-906d-b01d804d4734",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:03.503 19:47:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:03.763 19:47:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:03.763 19:47:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.763 19:47:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:03.763 /home/vagrant/spdk_repo/spdk 00:19:03.763 19:47:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:03.763 19:47:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:03.763 00:19:03.763 real 0m13.138s 00:19:03.763 user 0m13.160s 00:19:03.763 sys 0m0.805s 00:19:03.763 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.763 19:47:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:03.763 ************************************ 00:19:03.763 END TEST bdev_fio 00:19:03.763 ************************************ 00:19:03.763 19:47:46 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:03.763 19:47:46 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:03.763 19:47:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:03.763 19:47:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.763 19:47:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.763 ************************************ 00:19:03.763 START TEST bdev_verify 00:19:03.763 ************************************ 00:19:03.763 19:47:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:03.763 [2024-12-12 19:47:46.527594] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:03.764 [2024-12-12 19:47:46.527708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92157 ] 00:19:04.023 [2024-12-12 19:47:46.707184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:04.023 [2024-12-12 19:47:46.815964] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.023 [2024-12-12 19:47:46.815986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.593 Running I/O for 5 seconds... 00:19:06.910 10158.00 IOPS, 39.68 MiB/s [2024-12-12T19:47:50.693Z] 10224.50 IOPS, 39.94 MiB/s [2024-12-12T19:47:51.631Z] 10241.33 IOPS, 40.01 MiB/s [2024-12-12T19:47:52.570Z] 10280.00 IOPS, 40.16 MiB/s [2024-12-12T19:47:52.570Z] 10280.60 IOPS, 40.16 MiB/s 00:19:09.725 Latency(us) 00:19:09.725 [2024-12-12T19:47:52.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.725 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:09.725 Verification LBA range: start 0x0 length 0x2000 00:19:09.725 raid5f : 5.02 4056.63 15.85 0.00 0.00 47418.82 273.66 34113.06 00:19:09.725 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:09.725 Verification LBA range: start 0x2000 length 0x2000 00:19:09.725 raid5f : 5.01 6227.40 24.33 0.00 0.00 30984.31 213.74 22665.73 00:19:09.725 [2024-12-12T19:47:52.570Z] =================================================================================================================== 00:19:09.725 [2024-12-12T19:47:52.570Z] Total : 10284.03 40.17 0.00 0.00 37472.49 213.74 34113.06 00:19:11.106 00:19:11.106 real 0m7.247s 00:19:11.106 user 0m13.405s 00:19:11.106 sys 0m0.276s 00:19:11.106 19:47:53 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.106 19:47:53 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:11.106 ************************************ 00:19:11.106 END TEST bdev_verify 00:19:11.106 ************************************ 00:19:11.106 19:47:53 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:11.106 19:47:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:11.106 19:47:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.106 19:47:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:11.106 ************************************ 00:19:11.106 START TEST bdev_verify_big_io 00:19:11.106 ************************************ 00:19:11.106 19:47:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:11.106 [2024-12-12 19:47:53.847758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:11.106 [2024-12-12 19:47:53.847874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92251 ] 00:19:11.366 [2024-12-12 19:47:54.024494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:11.366 [2024-12-12 19:47:54.133033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.366 [2024-12-12 19:47:54.133060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.936 Running I/O for 5 seconds... 
00:19:14.254 633.00 IOPS, 39.56 MiB/s [2024-12-12T19:47:58.038Z] 728.50 IOPS, 45.53 MiB/s [2024-12-12T19:47:58.977Z] 719.00 IOPS, 44.94 MiB/s [2024-12-12T19:47:59.917Z] 745.25 IOPS, 46.58 MiB/s [2024-12-12T19:48:00.176Z] 761.20 IOPS, 47.58 MiB/s 00:19:17.331 Latency(us) 00:19:17.331 [2024-12-12T19:48:00.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.331 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:17.331 Verification LBA range: start 0x0 length 0x200 00:19:17.331 raid5f : 5.33 333.40 20.84 0.00 0.00 9537207.46 224.48 399283.09 00:19:17.331 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:17.331 Verification LBA range: start 0x200 length 0x200 00:19:17.331 raid5f : 5.24 436.18 27.26 0.00 0.00 7375687.67 194.07 313199.12 00:19:17.331 [2024-12-12T19:48:00.176Z] =================================================================================================================== 00:19:17.331 [2024-12-12T19:48:00.176Z] Total : 769.58 48.10 0.00 0.00 8320753.93 194.07 399283.09 00:19:18.714 ************************************ 00:19:18.714 END TEST bdev_verify_big_io 00:19:18.714 ************************************ 00:19:18.714 00:19:18.714 real 0m7.576s 00:19:18.714 user 0m14.038s 00:19:18.714 sys 0m0.291s 00:19:18.714 19:48:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.714 19:48:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.714 19:48:01 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:18.714 19:48:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:18.714 19:48:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.714 19:48:01 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:18.714 ************************************ 00:19:18.714 START TEST bdev_write_zeroes 00:19:18.714 ************************************ 00:19:18.714 19:48:01 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:18.714 [2024-12-12 19:48:01.500392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:18.714 [2024-12-12 19:48:01.500511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92355 ] 00:19:18.974 [2024-12-12 19:48:01.679812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.974 [2024-12-12 19:48:01.785138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.543 Running I/O for 1 seconds... 
00:19:20.480 29487.00 IOPS, 115.18 MiB/s 00:19:20.480 Latency(us) 00:19:20.480 [2024-12-12T19:48:03.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.480 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:20.480 raid5f : 1.01 29469.14 115.11 0.00 0.00 4331.22 1438.07 5838.14 00:19:20.480 [2024-12-12T19:48:03.325Z] =================================================================================================================== 00:19:20.480 [2024-12-12T19:48:03.325Z] Total : 29469.14 115.11 0.00 0.00 4331.22 1438.07 5838.14 00:19:21.860 ************************************ 00:19:21.860 END TEST bdev_write_zeroes 00:19:21.860 ************************************ 00:19:21.860 00:19:21.860 real 0m3.225s 00:19:21.860 user 0m2.832s 00:19:21.860 sys 0m0.264s 00:19:21.860 19:48:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.860 19:48:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:21.860 19:48:04 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:21.860 19:48:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:21.860 19:48:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.860 19:48:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:21.860 ************************************ 00:19:21.860 START TEST bdev_json_nonenclosed 00:19:21.860 ************************************ 00:19:21.860 19:48:04 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:22.119 [2024-12-12 
19:48:04.791725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:22.119 [2024-12-12 19:48:04.791842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92411 ] 00:19:22.388 [2024-12-12 19:48:04.967716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.388 [2024-12-12 19:48:05.071240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.388 [2024-12-12 19:48:05.071331] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:22.388 [2024-12-12 19:48:05.071356] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:22.388 [2024-12-12 19:48:05.071366] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:22.647 00:19:22.647 real 0m0.607s 00:19:22.647 user 0m0.369s 00:19:22.647 sys 0m0.133s 00:19:22.647 ************************************ 00:19:22.647 END TEST bdev_json_nonenclosed 00:19:22.647 ************************************ 00:19:22.647 19:48:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.647 19:48:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:22.647 19:48:05 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:22.647 19:48:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:22.647 19:48:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.647 19:48:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:22.647 
************************************ 00:19:22.647 START TEST bdev_json_nonarray 00:19:22.647 ************************************ 00:19:22.647 19:48:05 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:22.647 [2024-12-12 19:48:05.477995] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:22.647 [2024-12-12 19:48:05.478107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92438 ] 00:19:22.906 [2024-12-12 19:48:05.653535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.166 [2024-12-12 19:48:05.768473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.166 [2024-12-12 19:48:05.768611] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:23.166 [2024-12-12 19:48:05.768630] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:23.166 [2024-12-12 19:48:05.768648] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:23.166 00:19:23.166 real 0m0.624s 00:19:23.166 user 0m0.372s 00:19:23.166 sys 0m0.146s 00:19:23.166 19:48:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.166 ************************************ 00:19:23.166 END TEST bdev_json_nonarray 00:19:23.166 ************************************ 00:19:23.166 19:48:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:23.432 19:48:06 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:23.432 00:19:23.432 real 0m47.698s 00:19:23.432 user 1m4.173s 00:19:23.432 sys 0m5.031s 00:19:23.432 19:48:06 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.432 ************************************ 00:19:23.432 END TEST blockdev_raid5f 00:19:23.432 
************************************ 00:19:23.432 19:48:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.432 19:48:06 -- spdk/autotest.sh@194 -- # uname -s 00:19:23.432 19:48:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:23.432 19:48:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:23.432 19:48:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:23.432 19:48:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:23.432 19:48:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.432 19:48:06 -- common/autotest_common.sh@10 -- # set +x 00:19:23.432 19:48:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:23.432 19:48:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:23.432 19:48:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:23.432 19:48:06 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:23.432 19:48:06 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:23.432 19:48:06 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 
00:19:23.432 19:48:06 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:23.432 19:48:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.432 19:48:06 -- common/autotest_common.sh@10 -- # set +x 00:19:23.432 19:48:06 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:23.432 19:48:06 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:23.432 19:48:06 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:23.432 19:48:06 -- common/autotest_common.sh@10 -- # set +x 00:19:25.971 INFO: APP EXITING 00:19:25.971 INFO: killing all VMs 00:19:25.971 INFO: killing vhost app 00:19:25.971 INFO: EXIT DONE 00:19:26.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:26.231 Waiting for block devices as requested 00:19:26.490 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:26.490 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:27.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:27.431 Cleaning 00:19:27.431 Removing: /var/run/dpdk/spdk0/config 00:19:27.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:27.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:27.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:27.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:27.431 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:27.431 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:27.431 Removing: /dev/shm/spdk_tgt_trace.pid58669 00:19:27.431 Removing: /var/run/dpdk/spdk0 00:19:27.431 Removing: /var/run/dpdk/spdk_pid58429 00:19:27.431 Removing: /var/run/dpdk/spdk_pid58669 00:19:27.431 Removing: /var/run/dpdk/spdk_pid58898 00:19:27.431 Removing: /var/run/dpdk/spdk_pid59008 00:19:27.431 Removing: /var/run/dpdk/spdk_pid59064 00:19:27.431 Removing: /var/run/dpdk/spdk_pid59192 00:19:27.431 Removing: /var/run/dpdk/spdk_pid59210 00:19:27.431 
Removing: /var/run/dpdk/spdk_pid59420 00:19:27.431 Removing: /var/run/dpdk/spdk_pid59535 00:19:27.431 Removing: /var/run/dpdk/spdk_pid59643 00:19:27.431 Removing: /var/run/dpdk/spdk_pid59766 00:19:27.691 Removing: /var/run/dpdk/spdk_pid59874 00:19:27.691 Removing: /var/run/dpdk/spdk_pid59914 00:19:27.691 Removing: /var/run/dpdk/spdk_pid59951 00:19:27.691 Removing: /var/run/dpdk/spdk_pid60026 00:19:27.691 Removing: /var/run/dpdk/spdk_pid60138 00:19:27.691 Removing: /var/run/dpdk/spdk_pid60580 00:19:27.691 Removing: /var/run/dpdk/spdk_pid60650 00:19:27.691 Removing: /var/run/dpdk/spdk_pid60729 00:19:27.691 Removing: /var/run/dpdk/spdk_pid60745 00:19:27.691 Removing: /var/run/dpdk/spdk_pid60895 00:19:27.691 Removing: /var/run/dpdk/spdk_pid60911 00:19:27.691 Removing: /var/run/dpdk/spdk_pid61057 00:19:27.691 Removing: /var/run/dpdk/spdk_pid61078 00:19:27.691 Removing: /var/run/dpdk/spdk_pid61142 00:19:27.691 Removing: /var/run/dpdk/spdk_pid61160 00:19:27.691 Removing: /var/run/dpdk/spdk_pid61230 00:19:27.691 Removing: /var/run/dpdk/spdk_pid61248 00:19:27.691 Removing: /var/run/dpdk/spdk_pid61443 00:19:27.691 Removing: /var/run/dpdk/spdk_pid61485 00:19:27.691 Removing: /var/run/dpdk/spdk_pid61574 00:19:27.691 Removing: /var/run/dpdk/spdk_pid62913 00:19:27.691 Removing: /var/run/dpdk/spdk_pid63124 00:19:27.691 Removing: /var/run/dpdk/spdk_pid63264 00:19:27.691 Removing: /var/run/dpdk/spdk_pid63902 00:19:27.691 Removing: /var/run/dpdk/spdk_pid64114 00:19:27.691 Removing: /var/run/dpdk/spdk_pid64254 00:19:27.691 Removing: /var/run/dpdk/spdk_pid64897 00:19:27.691 Removing: /var/run/dpdk/spdk_pid65227 00:19:27.691 Removing: /var/run/dpdk/spdk_pid65373 00:19:27.691 Removing: /var/run/dpdk/spdk_pid66747 00:19:27.691 Removing: /var/run/dpdk/spdk_pid67011 00:19:27.691 Removing: /var/run/dpdk/spdk_pid67151 00:19:27.691 Removing: /var/run/dpdk/spdk_pid68546 00:19:27.691 Removing: /var/run/dpdk/spdk_pid68799 00:19:27.691 Removing: /var/run/dpdk/spdk_pid68945 00:19:27.691 Removing: 
/var/run/dpdk/spdk_pid70338 00:19:27.691 Removing: /var/run/dpdk/spdk_pid70784 00:19:27.691 Removing: /var/run/dpdk/spdk_pid70932 00:19:27.691 Removing: /var/run/dpdk/spdk_pid72411 00:19:27.691 Removing: /var/run/dpdk/spdk_pid72681 00:19:27.691 Removing: /var/run/dpdk/spdk_pid72827 00:19:27.691 Removing: /var/run/dpdk/spdk_pid74317 00:19:27.691 Removing: /var/run/dpdk/spdk_pid74587 00:19:27.691 Removing: /var/run/dpdk/spdk_pid74733 00:19:27.691 Removing: /var/run/dpdk/spdk_pid76224 00:19:27.691 Removing: /var/run/dpdk/spdk_pid76711 00:19:27.691 Removing: /var/run/dpdk/spdk_pid76857 00:19:27.691 Removing: /var/run/dpdk/spdk_pid77006 00:19:27.691 Removing: /var/run/dpdk/spdk_pid77425 00:19:27.691 Removing: /var/run/dpdk/spdk_pid78155 00:19:27.691 Removing: /var/run/dpdk/spdk_pid78544 00:19:27.691 Removing: /var/run/dpdk/spdk_pid79237 00:19:27.691 Removing: /var/run/dpdk/spdk_pid79682 00:19:27.691 Removing: /var/run/dpdk/spdk_pid80441 00:19:27.691 Removing: /var/run/dpdk/spdk_pid80848 00:19:27.691 Removing: /var/run/dpdk/spdk_pid82810 00:19:27.951 Removing: /var/run/dpdk/spdk_pid83259 00:19:27.951 Removing: /var/run/dpdk/spdk_pid83695 00:19:27.951 Removing: /var/run/dpdk/spdk_pid85779 00:19:27.951 Removing: /var/run/dpdk/spdk_pid86271 00:19:27.951 Removing: /var/run/dpdk/spdk_pid86764 00:19:27.951 Removing: /var/run/dpdk/spdk_pid87821 00:19:27.951 Removing: /var/run/dpdk/spdk_pid88144 00:19:27.951 Removing: /var/run/dpdk/spdk_pid89082 00:19:27.951 Removing: /var/run/dpdk/spdk_pid89405 00:19:27.951 Removing: /var/run/dpdk/spdk_pid90339 00:19:27.951 Removing: /var/run/dpdk/spdk_pid90660 00:19:27.951 Removing: /var/run/dpdk/spdk_pid91342 00:19:27.951 Removing: /var/run/dpdk/spdk_pid91620 00:19:27.951 Removing: /var/run/dpdk/spdk_pid91683 00:19:27.951 Removing: /var/run/dpdk/spdk_pid91725 00:19:27.951 Removing: /var/run/dpdk/spdk_pid91979 00:19:27.951 Removing: /var/run/dpdk/spdk_pid92157 00:19:27.951 Removing: /var/run/dpdk/spdk_pid92251 00:19:27.951 Removing: 
/var/run/dpdk/spdk_pid92355 00:19:27.951 Removing: /var/run/dpdk/spdk_pid92411 00:19:27.951 Removing: /var/run/dpdk/spdk_pid92438 00:19:27.951 Clean 00:19:27.951 19:48:10 -- common/autotest_common.sh@1453 -- # return 0 00:19:27.951 19:48:10 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:27.951 19:48:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.951 19:48:10 -- common/autotest_common.sh@10 -- # set +x 00:19:27.951 19:48:10 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:27.951 19:48:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.951 19:48:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.212 19:48:10 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:28.212 19:48:10 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:28.212 19:48:10 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:28.212 19:48:10 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:28.212 19:48:10 -- spdk/autotest.sh@398 -- # hostname 00:19:28.212 19:48:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:28.212 geninfo: WARNING: invalid characters removed from testname! 
00:19:54.786 19:48:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:56.168 19:48:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:58.076 19:48:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:59.984 19:48:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:01.893 19:48:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:03.803 19:48:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:06.345 19:48:48 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:06.345 19:48:48 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:06.345 19:48:48 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:06.345 19:48:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:06.345 19:48:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:06.345 19:48:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:06.345 + [[ -n 5429 ]] 00:20:06.345 + sudo kill 5429 00:20:06.355 [Pipeline] } 00:20:06.370 [Pipeline] // timeout 00:20:06.375 [Pipeline] } 00:20:06.389 [Pipeline] // stage 00:20:06.394 [Pipeline] } 00:20:06.409 [Pipeline] // catchError 00:20:06.418 [Pipeline] stage 00:20:06.420 [Pipeline] { (Stop VM) 00:20:06.432 [Pipeline] sh 00:20:06.716 + vagrant halt 00:20:09.257 ==> default: Halting domain... 00:20:17.403 [Pipeline] sh 00:20:17.686 + vagrant destroy -f 00:20:20.225 ==> default: Removing domain... 
00:20:20.238 [Pipeline] sh 00:20:20.524 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:20.534 [Pipeline] } 00:20:20.548 [Pipeline] // stage 00:20:20.553 [Pipeline] } 00:20:20.567 [Pipeline] // dir 00:20:20.572 [Pipeline] } 00:20:20.587 [Pipeline] // wrap 00:20:20.593 [Pipeline] } 00:20:20.606 [Pipeline] // catchError 00:20:20.615 [Pipeline] stage 00:20:20.617 [Pipeline] { (Epilogue) 00:20:20.629 [Pipeline] sh 00:20:20.914 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:25.127 [Pipeline] catchError 00:20:25.129 [Pipeline] { 00:20:25.141 [Pipeline] sh 00:20:25.438 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:25.438 Artifacts sizes are good 00:20:25.506 [Pipeline] } 00:20:25.519 [Pipeline] // catchError 00:20:25.529 [Pipeline] archiveArtifacts 00:20:25.537 Archiving artifacts 00:20:25.633 [Pipeline] cleanWs 00:20:25.646 [WS-CLEANUP] Deleting project workspace... 00:20:25.646 [WS-CLEANUP] Deferred wipeout is used... 00:20:25.653 [WS-CLEANUP] done 00:20:25.655 [Pipeline] } 00:20:25.670 [Pipeline] // stage 00:20:25.675 [Pipeline] } 00:20:25.688 [Pipeline] // node 00:20:25.693 [Pipeline] End of Pipeline 00:20:25.770 Finished: SUCCESS